Feb 18 19:38:34 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 18 19:38:34 crc restorecon[4689]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:38:34 crc restorecon[4689]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 
19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:38:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 
19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:38:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:38:34 crc restorecon[4689]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 
crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:34 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:34 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:38:35 crc restorecon[4689]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc 
restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:38:35 crc restorecon[4689]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:38:35 crc restorecon[4689]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 18 19:38:35 crc kubenswrapper[5007]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:38:35 crc kubenswrapper[5007]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 18 19:38:35 crc kubenswrapper[5007]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:38:35 crc kubenswrapper[5007]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 18 19:38:35 crc kubenswrapper[5007]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 18 19:38:35 crc kubenswrapper[5007]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.703031 5007 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708700 5007 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708717 5007 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708721 5007 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708725 5007 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708729 5007 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708733 5007 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708736 5007 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708740 5007 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708754 5007 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708759 5007 
feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708763 5007 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708767 5007 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708772 5007 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708777 5007 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708782 5007 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708785 5007 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708789 5007 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708793 5007 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708797 5007 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708800 5007 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708803 5007 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708807 5007 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708810 5007 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708814 5007 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 
19:38:35.708817 5007 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708821 5007 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708824 5007 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708828 5007 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708832 5007 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708835 5007 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708838 5007 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708842 5007 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708845 5007 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708848 5007 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708852 5007 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708855 5007 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708858 5007 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708862 5007 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708866 5007 feature_gate.go:330] unrecognized feature gate: 
AzureWorkloadIdentity Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708871 5007 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708882 5007 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708888 5007 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708892 5007 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708897 5007 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708900 5007 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708904 5007 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708908 5007 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708911 5007 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708915 5007 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708918 5007 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708922 5007 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708926 5007 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708930 5007 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708934 5007 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708938 5007 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708942 5007 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708946 5007 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708949 5007 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708952 5007 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708956 5007 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708959 5007 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708964 5007 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708968 5007 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708972 5007 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708976 5007 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708980 5007 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708984 5007 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708987 5007 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708990 5007 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708994 5007 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.708999 5007 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709091 5007 flags.go:64] FLAG: --address="0.0.0.0" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709102 5007 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709111 5007 flags.go:64] FLAG: --anonymous-auth="true" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709117 5007 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709122 5007 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709126 5007 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 18 19:38:35 crc 
kubenswrapper[5007]: I0218 19:38:35.709132 5007 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709138 5007 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709145 5007 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709152 5007 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709158 5007 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709164 5007 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709170 5007 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709175 5007 flags.go:64] FLAG: --cgroup-root="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709180 5007 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709185 5007 flags.go:64] FLAG: --client-ca-file="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709190 5007 flags.go:64] FLAG: --cloud-config="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709195 5007 flags.go:64] FLAG: --cloud-provider="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709201 5007 flags.go:64] FLAG: --cluster-dns="[]" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709209 5007 flags.go:64] FLAG: --cluster-domain="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709214 5007 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709219 5007 flags.go:64] FLAG: --config-dir="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709224 5007 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709229 5007 flags.go:64] FLAG: --container-log-max-files="5" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709420 5007 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709444 5007 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709450 5007 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709456 5007 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709462 5007 flags.go:64] FLAG: --contention-profiling="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709467 5007 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709474 5007 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709480 5007 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709485 5007 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709491 5007 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709496 5007 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709502 5007 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709507 5007 flags.go:64] FLAG: --enable-load-reader="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709512 5007 flags.go:64] FLAG: --enable-server="true" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709517 5007 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 18 
19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709526 5007 flags.go:64] FLAG: --event-burst="100" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709532 5007 flags.go:64] FLAG: --event-qps="50" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709537 5007 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709543 5007 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709548 5007 flags.go:64] FLAG: --eviction-hard="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709555 5007 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709561 5007 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709566 5007 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709571 5007 flags.go:64] FLAG: --eviction-soft="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709576 5007 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709581 5007 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709586 5007 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709591 5007 flags.go:64] FLAG: --experimental-mounter-path="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709596 5007 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709601 5007 flags.go:64] FLAG: --fail-swap-on="true" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709607 5007 flags.go:64] FLAG: --feature-gates="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709613 5007 flags.go:64] FLAG: --file-check-frequency="20s" Feb 18 19:38:35 crc 
kubenswrapper[5007]: I0218 19:38:35.709618 5007 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709622 5007 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709626 5007 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709631 5007 flags.go:64] FLAG: --healthz-port="10248" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709635 5007 flags.go:64] FLAG: --help="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709639 5007 flags.go:64] FLAG: --hostname-override="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709643 5007 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709647 5007 flags.go:64] FLAG: --http-check-frequency="20s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709651 5007 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709655 5007 flags.go:64] FLAG: --image-credential-provider-config="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709659 5007 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709664 5007 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709668 5007 flags.go:64] FLAG: --image-service-endpoint="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709671 5007 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709675 5007 flags.go:64] FLAG: --kube-api-burst="100" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709680 5007 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709684 5007 flags.go:64] FLAG: --kube-api-qps="50" Feb 
18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709688 5007 flags.go:64] FLAG: --kube-reserved="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709692 5007 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709696 5007 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709700 5007 flags.go:64] FLAG: --kubelet-cgroups="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709705 5007 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709709 5007 flags.go:64] FLAG: --lock-file="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709713 5007 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709718 5007 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709722 5007 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709728 5007 flags.go:64] FLAG: --log-json-split-stream="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709732 5007 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709737 5007 flags.go:64] FLAG: --log-text-split-stream="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709740 5007 flags.go:64] FLAG: --logging-format="text" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709744 5007 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709749 5007 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709753 5007 flags.go:64] FLAG: --manifest-url="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709757 5007 flags.go:64] FLAG: --manifest-url-header="" Feb 18 
19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709764 5007 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709768 5007 flags.go:64] FLAG: --max-open-files="1000000" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709773 5007 flags.go:64] FLAG: --max-pods="110" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709777 5007 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709782 5007 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709786 5007 flags.go:64] FLAG: --memory-manager-policy="None" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709790 5007 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709794 5007 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709799 5007 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709803 5007 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709813 5007 flags.go:64] FLAG: --node-status-max-images="50" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709817 5007 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709821 5007 flags.go:64] FLAG: --oom-score-adj="-999" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709825 5007 flags.go:64] FLAG: --pod-cidr="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709829 5007 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 18 19:38:35 crc kubenswrapper[5007]: 
I0218 19:38:35.709835 5007 flags.go:64] FLAG: --pod-manifest-path="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709839 5007 flags.go:64] FLAG: --pod-max-pids="-1" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709843 5007 flags.go:64] FLAG: --pods-per-core="0" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709847 5007 flags.go:64] FLAG: --port="10250" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709851 5007 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709856 5007 flags.go:64] FLAG: --provider-id="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709860 5007 flags.go:64] FLAG: --qos-reserved="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709864 5007 flags.go:64] FLAG: --read-only-port="10255" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709868 5007 flags.go:64] FLAG: --register-node="true" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709872 5007 flags.go:64] FLAG: --register-schedulable="true" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709876 5007 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709884 5007 flags.go:64] FLAG: --registry-burst="10" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709888 5007 flags.go:64] FLAG: --registry-qps="5" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709892 5007 flags.go:64] FLAG: --reserved-cpus="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709896 5007 flags.go:64] FLAG: --reserved-memory="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709901 5007 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709905 5007 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709909 5007 flags.go:64] FLAG: --rotate-certificates="false" Feb 18 
19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709913 5007 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709917 5007 flags.go:64] FLAG: --runonce="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709921 5007 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709925 5007 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709930 5007 flags.go:64] FLAG: --seccomp-default="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709934 5007 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709941 5007 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709945 5007 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709949 5007 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709953 5007 flags.go:64] FLAG: --storage-driver-password="root" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709958 5007 flags.go:64] FLAG: --storage-driver-secure="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709962 5007 flags.go:64] FLAG: --storage-driver-table="stats" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709966 5007 flags.go:64] FLAG: --storage-driver-user="root" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709970 5007 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709974 5007 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709978 5007 flags.go:64] FLAG: --system-cgroups="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709982 5007 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709990 5007 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709993 5007 flags.go:64] FLAG: --tls-cert-file="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.709998 5007 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.710003 5007 flags.go:64] FLAG: --tls-min-version="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.710007 5007 flags.go:64] FLAG: --tls-private-key-file="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.710011 5007 flags.go:64] FLAG: --topology-manager-policy="none" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.710015 5007 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.710020 5007 flags.go:64] FLAG: --topology-manager-scope="container" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.710024 5007 flags.go:64] FLAG: --v="2" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.710030 5007 flags.go:64] FLAG: --version="false" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.710036 5007 flags.go:64] FLAG: --vmodule="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.710040 5007 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.710045 5007 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710182 5007 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710187 5007 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710193 5007 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710197 5007 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710202 5007 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710206 5007 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710210 5007 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710214 5007 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710222 5007 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710225 5007 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710230 5007 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710234 5007 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710238 5007 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710241 5007 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710245 5007 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710249 5007 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710253 5007 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710257 5007 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710262 5007 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710266 5007 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710269 5007 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710273 5007 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710277 5007 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710282 5007 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710286 5007 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710289 5007 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710293 5007 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710296 5007 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710300 5007 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710303 5007 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710307 5007 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710310 5007 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710313 5007 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710317 5007 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710320 5007 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710324 5007 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710328 5007 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710331 5007 feature_gate.go:330] unrecognized 
feature gate: EtcdBackendQuota Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710335 5007 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710338 5007 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710344 5007 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710347 5007 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710352 5007 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710356 5007 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710359 5007 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710362 5007 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710366 5007 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710369 5007 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710374 5007 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710378 5007 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710382 5007 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710387 5007 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:38:35 crc 
kubenswrapper[5007]: W0218 19:38:35.710391 5007 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710395 5007 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710400 5007 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710404 5007 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710408 5007 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710414 5007 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710419 5007 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710447 5007 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710451 5007 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710454 5007 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710458 5007 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710461 5007 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710465 5007 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710469 5007 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:38:35 crc 
kubenswrapper[5007]: W0218 19:38:35.710473 5007 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710477 5007 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710482 5007 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710486 5007 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.710491 5007 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.711272 5007 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.721212 5007 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.721268 5007 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721411 5007 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721426 5007 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721459 5007 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721469 5007 feature_gate.go:330] unrecognized 
feature gate: GCPClusterHostedDNS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721478 5007 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721487 5007 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721495 5007 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721504 5007 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721514 5007 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721522 5007 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721530 5007 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721538 5007 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721546 5007 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721554 5007 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721562 5007 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721569 5007 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721577 5007 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721585 5007 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:38:35 crc 
kubenswrapper[5007]: W0218 19:38:35.721594 5007 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721602 5007 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721611 5007 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721618 5007 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721629 5007 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721639 5007 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721649 5007 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721657 5007 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721664 5007 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721672 5007 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721680 5007 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721688 5007 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721695 5007 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721704 5007 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 
19:38:35.721712 5007 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721720 5007 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721728 5007 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721736 5007 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721743 5007 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721751 5007 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721762 5007 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721774 5007 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721783 5007 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721792 5007 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721800 5007 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721810 5007 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721819 5007 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721827 5007 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721835 5007 feature_gate.go:330] unrecognized feature 
gate: CSIDriverSharedResource Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721843 5007 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721854 5007 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721864 5007 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721873 5007 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721882 5007 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721892 5007 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721900 5007 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721908 5007 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721917 5007 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721924 5007 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721932 5007 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721940 5007 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721948 5007 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721956 5007 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:38:35 crc 
kubenswrapper[5007]: W0218 19:38:35.721964 5007 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721972 5007 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721981 5007 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.721990 5007 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722004 5007 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722017 5007 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722027 5007 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722038 5007 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722049 5007 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722057 5007 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.722071 5007 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true 
VolumeAttributesClass:false]} Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722316 5007 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722332 5007 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722341 5007 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722351 5007 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722359 5007 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722369 5007 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722377 5007 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722385 5007 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722393 5007 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722402 5007 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722410 5007 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722422 5007 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722458 5007 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722467 5007 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722476 5007 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722485 5007 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722495 5007 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722503 5007 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722511 5007 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722519 5007 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722527 5007 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722535 5007 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722543 5007 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722550 5007 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722559 5007 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722567 5007 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722575 5007 feature_gate.go:330] 
unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722582 5007 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722590 5007 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722598 5007 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722606 5007 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722614 5007 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722622 5007 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722630 5007 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722637 5007 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722645 5007 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722653 5007 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722661 5007 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722669 5007 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722677 5007 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722685 5007 feature_gate.go:330] unrecognized feature gate: 
BareMetalLoadBalancer Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722693 5007 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722700 5007 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722708 5007 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722715 5007 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722723 5007 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722732 5007 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722739 5007 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722747 5007 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722755 5007 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722764 5007 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722771 5007 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722780 5007 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722787 5007 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722795 5007 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 
19:38:35.722803 5007 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722812 5007 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722820 5007 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722828 5007 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722836 5007 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722844 5007 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722855 5007 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722864 5007 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722872 5007 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722880 5007 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722888 5007 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722899 5007 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722909 5007 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722918 5007 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722927 5007 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.722935 5007 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.722948 5007 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.724277 5007 server.go:940] "Client rotation is on, will bootstrap in background" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.730237 5007 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.730391 5007 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.732270 5007 server.go:997] "Starting client certificate rotation" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.732319 5007 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.732563 5007 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-19 18:40:01.115305089 +0000 UTC Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.732641 5007 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.756723 5007 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:38:35 crc kubenswrapper[5007]: E0218 19:38:35.758228 5007 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.759754 5007 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.771319 5007 log.go:25] "Validated CRI v1 runtime API" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.805758 5007 log.go:25] "Validated CRI v1 image API" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.807892 5007 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.813313 5007 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-19-33-15-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.813352 5007 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.830866 5007 manager.go:217] Machine: {Timestamp:2026-02-18 19:38:35.82788346 +0000 UTC m=+0.610003004 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7a8712a4-988b-42cb-9846-bbae317fd720 BootID:1f6dd9a0-63c4-4274-b1e6-2404360e9b80 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:bd:e6:7a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:bd:e6:7a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0a:d5:77 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d1:99:94 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a1:67:6b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ab:ce:b8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:32:38:55:f8:d1:b6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:3e:32:44:46:2c:6f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.831075 5007 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.831190 5007 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.832631 5007 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.832870 5007 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.832909 5007 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.833197 5007 topology_manager.go:138] "Creating topology manager with none policy" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.833210 5007 container_manager_linux.go:303] "Creating device plugin manager" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.833808 5007 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.833860 5007 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.834500 5007 state_mem.go:36] "Initialized new in-memory state store" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.834622 5007 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.840064 5007 kubelet.go:418] "Attempting to sync node with API server" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.840097 5007 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.840135 5007 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.840153 5007 kubelet.go:324] "Adding apiserver pod source" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.840167 5007 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.843775 5007 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 18 19:38:35 crc kubenswrapper[5007]: E0218 19:38:35.843876 5007 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.843788 5007 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 18 19:38:35 crc kubenswrapper[5007]: E0218 19:38:35.844265 5007 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.844382 5007 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.845954 5007 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.849808 5007 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.851692 5007 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.851873 5007 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.852010 5007 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.852130 5007 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.852249 5007 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.852356 5007 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.852505 5007 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.852642 5007 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.852757 5007 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.852874 5007 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.852987 5007 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.853095 5007 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.854535 5007 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.855150 5007 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.856686 5007 server.go:1280] "Started kubelet" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.857685 5007 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.857685 5007 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.858229 5007 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.858292 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.858331 5007 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.858474 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 01:19:08.701929873 +0000 UTC Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.858651 5007 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.858671 5007 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 18 19:38:35 crc systemd[1]: Started Kubernetes Kubelet. 
Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.858785 5007 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 18 19:38:35 crc kubenswrapper[5007]: E0218 19:38:35.859039 5007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="200ms" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.859344 5007 server.go:460] "Adding debug handlers to kubelet server" Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.859368 5007 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 18 19:38:35 crc kubenswrapper[5007]: E0218 19:38:35.859421 5007 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.860196 5007 factory.go:55] Registering systemd factory Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.860221 5007 factory.go:221] Registration of the systemd container factory successfully Feb 18 19:38:35 crc kubenswrapper[5007]: E0218 19:38:35.865461 5007 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.865813 5007 factory.go:153] Registering CRI-O factory Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.865857 5007 factory.go:221] Registration of the crio container factory successfully Feb 
18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.865987 5007 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.866023 5007 factory.go:103] Registering Raw factory Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.866045 5007 manager.go:1196] Started watching for new ooms in manager Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.866887 5007 manager.go:319] Starting recovery of all containers Feb 18 19:38:35 crc kubenswrapper[5007]: E0218 19:38:35.867135 5007 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18956e7dfe474890 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:38:35.856210064 +0000 UTC m=+0.638329628,LastTimestamp:2026-02-18 19:38:35.856210064 +0000 UTC m=+0.638329628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.876450 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.876504 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.878889 5007 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.878930 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.878946 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.878960 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.878976 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.878989 5007 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879002 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879018 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879031 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879046 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879059 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879074 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879090 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879102 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879116 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879131 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879144 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879158 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879171 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879184 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879220 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879234 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879246 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879294 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879308 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879327 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879342 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879356 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879369 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879384 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879399 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879423 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879455 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879468 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879480 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879495 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879508 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879520 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879533 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879546 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879557 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879570 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" 
seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879582 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879594 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879608 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879621 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879634 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879646 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 
19:38:35.879658 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879671 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879683 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879700 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879714 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879726 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879740 5007 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879755 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879768 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879784 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879801 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879821 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879838 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879855 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879870 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879883 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879898 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879910 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879923 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879937 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879951 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879963 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879976 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.879989 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880004 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880017 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880030 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880043 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880055 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880068 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880081 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" 
seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880097 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880109 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880128 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880142 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880154 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880167 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: 
I0218 19:38:35.880181 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880193 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880205 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880218 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880233 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880245 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880257 5007 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880270 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880282 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880298 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880309 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880322 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880333 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880347 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880362 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880374 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880386 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880400 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880418 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880459 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880476 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880490 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880504 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880517 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880531 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880544 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880558 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880570 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880587 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880600 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880612 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" 
seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880626 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880639 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880651 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880663 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880675 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880687 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 
19:38:35.880700 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880711 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880723 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880736 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880750 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880761 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880774 5007 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880786 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880799 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880811 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880822 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880835 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880847 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880860 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880873 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880885 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880899 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880911 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880924 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880937 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880949 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880962 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880973 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.880987 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881002 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881015 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881027 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881039 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881051 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881064 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881076 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 18 
19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881088 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881101 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881112 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881124 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881136 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881149 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 
19:38:35.881161 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881172 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881184 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881197 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881210 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881224 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881236 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881248 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881260 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881272 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881285 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881298 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881311 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881323 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881335 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881353 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881367 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881382 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881398 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" 
seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881410 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881422 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881501 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881515 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881527 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881540 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 
19:38:35.881553 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881566 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881579 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881592 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881604 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881617 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881631 5007 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881644 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881656 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881670 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881682 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881695 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881707 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881720 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881733 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881747 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881759 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881771 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881783 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881796 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881810 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881822 5007 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881834 5007 reconstruct.go:97] "Volume reconstruction finished" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.881843 5007 reconciler.go:26] "Reconciler: start to sync state" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.892116 5007 manager.go:324] Recovery completed Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.902067 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.903526 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.903572 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.903587 
5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.904263 5007 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.904295 5007 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.904317 5007 state_mem.go:36] "Initialized new in-memory state store" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.906532 5007 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.908297 5007 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.908331 5007 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.908367 5007 kubelet.go:2335] "Starting kubelet main sync loop" Feb 18 19:38:35 crc kubenswrapper[5007]: E0218 19:38:35.908416 5007 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 18 19:38:35 crc kubenswrapper[5007]: W0218 19:38:35.911697 5007 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 18 19:38:35 crc kubenswrapper[5007]: E0218 19:38:35.911903 5007 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 18 
19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.927212 5007 policy_none.go:49] "None policy: Start" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.928162 5007 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.928190 5007 state_mem.go:35] "Initializing new in-memory state store" Feb 18 19:38:35 crc kubenswrapper[5007]: E0218 19:38:35.965555 5007 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.980618 5007 manager.go:334] "Starting Device Plugin manager" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.980668 5007 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.980682 5007 server.go:79] "Starting device plugin registration server" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.981083 5007 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.981105 5007 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.981272 5007 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.981343 5007 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 18 19:38:35 crc kubenswrapper[5007]: I0218 19:38:35.981354 5007 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 18 19:38:35 crc kubenswrapper[5007]: E0218 19:38:35.993282 5007 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.008565 5007 kubelet.go:2421] "SyncLoop ADD" 
source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.008720 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.009828 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.009884 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.009897 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.010064 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.010329 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.010367 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.010992 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.011015 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.011028 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.011056 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.011087 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.011103 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.011276 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.011415 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.011466 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012056 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012083 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012093 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012161 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012178 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012186 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012207 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012625 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012661 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012821 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012840 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012848 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.012966 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.013114 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.013173 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.013518 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.013536 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.013545 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.013601 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.013618 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.013627 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.013696 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.013718 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.014463 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.014483 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.014491 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.014740 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.014760 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.014768 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: E0218 19:38:36.059538 5007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.081231 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.082736 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc 
kubenswrapper[5007]: I0218 19:38:36.082778 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.082790 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.082820 5007 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:38:36 crc kubenswrapper[5007]: E0218 19:38:36.083251 5007 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084500 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084591 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084637 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084670 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084699 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084734 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084766 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084799 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084837 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084867 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084924 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084956 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.084981 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.085006 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.085035 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186090 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186174 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186212 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186249 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186279 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186307 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186346 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186345 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186375 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 
19:38:36.186419 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186514 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186501 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186563 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186593 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186622 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186630 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186636 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186650 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186695 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186722 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186739 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186774 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186787 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186717 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186761 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186851 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186849 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186866 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186880 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.186898 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.284183 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.285875 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.285921 5007 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.285938 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.285970 5007 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:38:36 crc kubenswrapper[5007]: E0218 19:38:36.286586 5007 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.344280 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.361715 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.377346 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.384515 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.388768 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:36 crc kubenswrapper[5007]: W0218 19:38:36.449746 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-82c286bd7fa3238ded0150dbcd33c4957d8da5997760931f832a6a78f6244657 WatchSource:0}: Error finding container 82c286bd7fa3238ded0150dbcd33c4957d8da5997760931f832a6a78f6244657: Status 404 returned error can't find the container with id 82c286bd7fa3238ded0150dbcd33c4957d8da5997760931f832a6a78f6244657 Feb 18 19:38:36 crc kubenswrapper[5007]: W0218 19:38:36.452373 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-7a6e340742eada0f8beccbe90fb0bd8f46f221a9e81772735d005654e7e2d018 WatchSource:0}: Error finding container 7a6e340742eada0f8beccbe90fb0bd8f46f221a9e81772735d005654e7e2d018: Status 404 returned error can't find the container with id 7a6e340742eada0f8beccbe90fb0bd8f46f221a9e81772735d005654e7e2d018 Feb 18 19:38:36 crc kubenswrapper[5007]: W0218 19:38:36.455117 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-be8a192cf85abe058159e9150ea4025fb57e79ad358180a978f1a563a850ea48 WatchSource:0}: Error finding container be8a192cf85abe058159e9150ea4025fb57e79ad358180a978f1a563a850ea48: Status 404 returned error can't find the container with id be8a192cf85abe058159e9150ea4025fb57e79ad358180a978f1a563a850ea48 Feb 18 19:38:36 crc kubenswrapper[5007]: W0218 19:38:36.458731 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-99fec72777c2397aa94e0b29867e5f1acb31c39d30ed8704977861564f9641b7 
WatchSource:0}: Error finding container 99fec72777c2397aa94e0b29867e5f1acb31c39d30ed8704977861564f9641b7: Status 404 returned error can't find the container with id 99fec72777c2397aa94e0b29867e5f1acb31c39d30ed8704977861564f9641b7 Feb 18 19:38:36 crc kubenswrapper[5007]: E0218 19:38:36.460744 5007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="800ms" Feb 18 19:38:36 crc kubenswrapper[5007]: W0218 19:38:36.460934 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2b6ba2f0e3e182968020ff22b550d4ef12ac2a6e3b1c03bff22da0be9a829596 WatchSource:0}: Error finding container 2b6ba2f0e3e182968020ff22b550d4ef12ac2a6e3b1c03bff22da0be9a829596: Status 404 returned error can't find the container with id 2b6ba2f0e3e182968020ff22b550d4ef12ac2a6e3b1c03bff22da0be9a829596 Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.687499 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.691026 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.691576 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.691597 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.691626 5007 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:38:36 crc kubenswrapper[5007]: E0218 19:38:36.692063 5007 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.856267 5007 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 18 19:38:36 crc kubenswrapper[5007]: W0218 19:38:36.856513 5007 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 18 19:38:36 crc kubenswrapper[5007]: E0218 19:38:36.856599 5007 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.858847 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:54:53.764884269 +0000 UTC Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.913044 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"be8a192cf85abe058159e9150ea4025fb57e79ad358180a978f1a563a850ea48"} Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.914420 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2b6ba2f0e3e182968020ff22b550d4ef12ac2a6e3b1c03bff22da0be9a829596"} Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.915176 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"82c286bd7fa3238ded0150dbcd33c4957d8da5997760931f832a6a78f6244657"} Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.916012 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"99fec72777c2397aa94e0b29867e5f1acb31c39d30ed8704977861564f9641b7"} Feb 18 19:38:36 crc kubenswrapper[5007]: I0218 19:38:36.916849 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7a6e340742eada0f8beccbe90fb0bd8f46f221a9e81772735d005654e7e2d018"} Feb 18 19:38:37 crc kubenswrapper[5007]: W0218 19:38:37.204882 5007 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 18 19:38:37 crc kubenswrapper[5007]: E0218 19:38:37.205192 5007 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:38:37 crc kubenswrapper[5007]: W0218 19:38:37.256366 5007 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 18 19:38:37 crc kubenswrapper[5007]: E0218 19:38:37.256468 5007 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:38:37 crc kubenswrapper[5007]: E0218 19:38:37.261403 5007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s" Feb 18 19:38:37 crc kubenswrapper[5007]: W0218 19:38:37.405477 5007 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 18 19:38:37 crc kubenswrapper[5007]: E0218 19:38:37.405594 5007 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.492419 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.495540 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:37 crc 
kubenswrapper[5007]: I0218 19:38:37.495610 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.495625 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.495656 5007 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:38:37 crc kubenswrapper[5007]: E0218 19:38:37.496550 5007 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.856737 5007 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.859916 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:33:27.115694417 +0000 UTC Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.916344 5007 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 19:38:37 crc kubenswrapper[5007]: E0218 19:38:37.918115 5007 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.923304 5007 generic.go:334] "Generic (PLEG): 
container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208" exitCode=0 Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.923638 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.923752 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208"} Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.925345 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.925408 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.925466 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.927915 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.930147 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.930196 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.930210 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.932077 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76"} Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.932158 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4"} Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.932189 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da"} Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.932227 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873"} Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.932186 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.934217 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.934257 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.934275 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.935875 5007 generic.go:334] "Generic (PLEG): 
container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995" exitCode=0 Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.936029 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.936025 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995"} Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.938420 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.938515 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.938535 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.939914 5007 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b92a6166afe7ab601f6bb695d7920534284da56251be4f957ac2fac7f9cea5e0" exitCode=0 Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.940062 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.940064 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b92a6166afe7ab601f6bb695d7920534284da56251be4f957ac2fac7f9cea5e0"} Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.941706 5007 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.941756 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.941783 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.943333 5007 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b5ee652c6c2ac4995955ec4e109ccf6611f3599aa21b82f218621b87d1ca8f6d" exitCode=0
Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.943393 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b5ee652c6c2ac4995955ec4e109ccf6611f3599aa21b82f218621b87d1ca8f6d"}
Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.943533 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.944820 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.944879 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.944900 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.958079 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.958251 5007 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 18 19:38:37 crc kubenswrapper[5007]: I0218 19:38:37.958363 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 18 19:38:38 crc kubenswrapper[5007]: E0218 19:38:38.641780 5007 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18956e7dfe474890 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:38:35.856210064 +0000 UTC m=+0.638329628,LastTimestamp:2026-02-18 19:38:35.856210064 +0000 UTC m=+0.638329628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.856541 5007 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.860782 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:05:30.458793777 +0000 UTC
Feb 18 19:38:38 crc kubenswrapper[5007]: E0218 19:38:38.862531 5007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.948051 5007 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="faf48e19439c1d1aada9e59d1aae09a852ee47a1c0441f7c7b6d7efc2b778ee4" exitCode=0
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.948151 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"faf48e19439c1d1aada9e59d1aae09a852ee47a1c0441f7c7b6d7efc2b778ee4"}
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.948358 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.949601 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.949639 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.949651 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.951488 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97"}
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.951548 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b"}
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.951565 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55"}
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.951580 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf"}
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.955631 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7"}
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.955707 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.955711 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"70eaa5ac7b67de7ca34ad6deb725dbea9608c0e54080ef4ffe2a1b8b3029076e"}
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.955839 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334"}
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.957258 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.957288 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.957301 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.959384 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.959780 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.959742 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3528455c2d725202bc82b2008d3710621b5d75089db9b95f6143b5e5e8d0a6a9"}
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.960289 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.960345 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.960403 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.960958 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.961003 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:38 crc kubenswrapper[5007]: I0218 19:38:38.961016 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:39 crc kubenswrapper[5007]: W0218 19:38:39.062585 5007 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Feb 18 19:38:39 crc kubenswrapper[5007]: E0218 19:38:39.062705 5007 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.097092 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.098561 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.098625 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.098640 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.098676 5007 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 19:38:39 crc kubenswrapper[5007]: E0218 19:38:39.099306 5007 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc"
Feb 18 19:38:39 crc kubenswrapper[5007]: W0218 19:38:39.476071 5007 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Feb 18 19:38:39 crc kubenswrapper[5007]: E0218 19:38:39.476232 5007 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError"
Feb 18 19:38:39 crc kubenswrapper[5007]: W0218 19:38:39.488402 5007 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Feb 18 19:38:39 crc kubenswrapper[5007]: E0218 19:38:39.488523 5007 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.861651 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:04:38.489448406 +0000 UTC
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.968014 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c"}
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.968211 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.970069 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.970140 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.970168 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.976342 5007 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fc279ce0123a15bfe23d53f4347f97eab83bf3ed95255058de986f563f59df4f" exitCode=0
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.976570 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.976658 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.976665 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fc279ce0123a15bfe23d53f4347f97eab83bf3ed95255058de986f563f59df4f"}
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.976647 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.976873 5007 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.976957 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.978649 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.978714 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.978734 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.978796 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.978831 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.978851 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.979812 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.979854 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.979827 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.979873 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.979895 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:39 crc kubenswrapper[5007]: I0218 19:38:39.979925 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:40 crc kubenswrapper[5007]: I0218 19:38:40.862026 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:27:15.572487806 +0000 UTC
Feb 18 19:38:40 crc kubenswrapper[5007]: I0218 19:38:40.983116 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c4d4b9636254fef6d3318c1b2626e2f46ca75e429e6b0bb9fd63c13963385caf"}
Feb 18 19:38:40 crc kubenswrapper[5007]: I0218 19:38:40.983197 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"690c4bc315069542149039051e4ee0486ec89b7dbfb3ab29b2739f6c118d035a"}
Feb 18 19:38:40 crc kubenswrapper[5007]: I0218 19:38:40.983218 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fa8deaf43abde2ecb66521a6b1862ca0d203dbdd3722d60f4ad9e4cfa0aba238"}
Feb 18 19:38:40 crc kubenswrapper[5007]: I0218 19:38:40.983254 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:40 crc kubenswrapper[5007]: I0218 19:38:40.983391 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:38:40 crc kubenswrapper[5007]: I0218 19:38:40.984158 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:40 crc kubenswrapper[5007]: I0218 19:38:40.984230 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:40 crc kubenswrapper[5007]: I0218 19:38:40.984257 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.166879 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.167107 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.168572 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.168624 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.168639 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.389934 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.863136 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:18:01.320645354 +0000 UTC
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.991682 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"259aaa2e6ef41caba847b866bdd261f79147676b5e431c64d16ea5ffbc92b812"}
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.991733 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.991745 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f280acd1705235aea53dcc8f440161ae966114add256c4108f737c3756953b9a"}
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.991877 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.991902 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.993898 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.994008 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.994038 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.995390 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.995490 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.995518 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.999358 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.999588 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:41 crc kubenswrapper[5007]: I0218 19:38:41.999729 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:42 crc kubenswrapper[5007]: I0218 19:38:42.164971 5007 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 18 19:38:42 crc kubenswrapper[5007]: I0218 19:38:42.299966 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:42 crc kubenswrapper[5007]: I0218 19:38:42.302035 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:42 crc kubenswrapper[5007]: I0218 19:38:42.302250 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:42 crc kubenswrapper[5007]: I0218 19:38:42.302406 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:42 crc kubenswrapper[5007]: I0218 19:38:42.302661 5007 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 19:38:42 crc kubenswrapper[5007]: I0218 19:38:42.411292 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 18 19:38:42 crc kubenswrapper[5007]: I0218 19:38:42.863713 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:36:36.461403794 +0000 UTC
Feb 18 19:38:42 crc kubenswrapper[5007]: I0218 19:38:42.994208 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:42 crc kubenswrapper[5007]: I0218 19:38:42.995747 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:42 crc kubenswrapper[5007]: I0218 19:38:42.995804 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:42 crc kubenswrapper[5007]: I0218 19:38:42.995827 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:43 crc kubenswrapper[5007]: I0218 19:38:43.179064 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:38:43 crc kubenswrapper[5007]: I0218 19:38:43.179251 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:43 crc kubenswrapper[5007]: I0218 19:38:43.180488 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:43 crc kubenswrapper[5007]: I0218 19:38:43.180524 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:43 crc kubenswrapper[5007]: I0218 19:38:43.180537 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:43 crc kubenswrapper[5007]: I0218 19:38:43.864857 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:22:07.738215436 +0000 UTC
Feb 18 19:38:43 crc kubenswrapper[5007]: I0218 19:38:43.996814 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:43 crc kubenswrapper[5007]: I0218 19:38:43.998466 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:43 crc kubenswrapper[5007]: I0218 19:38:43.998541 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:43 crc kubenswrapper[5007]: I0218 19:38:43.998568 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.167288 5007 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.167390 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.534020 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.534195 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.535636 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.535711 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.535731 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.751515 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.751766 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.753367 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.753399 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.753443 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:44 crc kubenswrapper[5007]: I0218 19:38:44.865961 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:28:29.967440169 +0000 UTC
Feb 18 19:38:45 crc kubenswrapper[5007]: I0218 19:38:45.866470 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:24:03.56661027 +0000 UTC
Feb 18 19:38:45 crc kubenswrapper[5007]: E0218 19:38:45.993607 5007 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 19:38:46 crc kubenswrapper[5007]: I0218 19:38:46.073043 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:38:46 crc kubenswrapper[5007]: I0218 19:38:46.073337 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:46 crc kubenswrapper[5007]: I0218 19:38:46.074872 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:46 crc kubenswrapper[5007]: I0218 19:38:46.074947 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:46 crc kubenswrapper[5007]: I0218 19:38:46.074973 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:46 crc kubenswrapper[5007]: I0218 19:38:46.866901 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:36:52.952671929 +0000 UTC
Feb 18 19:38:47 crc kubenswrapper[5007]: I0218 19:38:47.867392 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:49:43.604127272 +0000 UTC
Feb 18 19:38:47 crc kubenswrapper[5007]: I0218 19:38:47.965392 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:38:47 crc kubenswrapper[5007]: I0218 19:38:47.965615 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:47 crc kubenswrapper[5007]: I0218 19:38:47.967294 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:47 crc kubenswrapper[5007]: I0218 19:38:47.967361 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:47 crc kubenswrapper[5007]: I0218 19:38:47.967382 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:47 crc kubenswrapper[5007]: I0218 19:38:47.971181 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:38:48 crc kubenswrapper[5007]: I0218 19:38:48.009118 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:48 crc kubenswrapper[5007]: I0218 19:38:48.010819 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:48 crc kubenswrapper[5007]: I0218 19:38:48.010881 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:48 crc kubenswrapper[5007]: I0218 19:38:48.010895 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:48 crc kubenswrapper[5007]: I0218 19:38:48.869362 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:25:55.1578103 +0000 UTC
Feb 18 19:38:49 crc kubenswrapper[5007]: I0218 19:38:49.857269 5007 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 18 19:38:49 crc kubenswrapper[5007]: I0218 19:38:49.869825 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:09:18.267821879 +0000 UTC
Feb 18 19:38:49 crc kubenswrapper[5007]: W0218 19:38:49.876075 5007 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 18 19:38:49 crc kubenswrapper[5007]: I0218 19:38:49.876160 5007 trace.go:236] Trace[204479871]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:38:39.874) (total time: 10002ms):
Feb 18 19:38:49 crc kubenswrapper[5007]: Trace[204479871]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:38:49.876)
Feb 18 19:38:49 crc kubenswrapper[5007]: Trace[204479871]: [10.002054025s] [10.002054025s] END
Feb 18 19:38:49 crc kubenswrapper[5007]: E0218 19:38:49.876183 5007 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 18 19:38:49 crc kubenswrapper[5007]: I0218 19:38:49.895762 5007 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60358->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 18 19:38:49 crc kubenswrapper[5007]: I0218 19:38:49.895809 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60358->192.168.126.11:17697: read: connection reset by peer"
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.017930 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.020184 5007 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c" exitCode=255
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.020249 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c"}
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.020521 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.021507 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.021550 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.021565 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.022252 5007 scope.go:117] "RemoveContainer" containerID="c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c"
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.870912 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:33:49.847422864 +0000 UTC
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.945693 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.945830 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.946686 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.946701 5007 kubelet_node_status.go:724] "Recording event message
for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.946709 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:50 crc kubenswrapper[5007]: I0218 19:38:50.988873 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.023460 5007 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.023528 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.025521 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.026884 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.027498 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123"} Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.027916 5007 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.033223 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.033337 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.033348 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.033415 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.033367 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.033500 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.037096 5007 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.037175 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.070091 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-etcd/etcd-crc" Feb 18 19:38:51 crc kubenswrapper[5007]: I0218 19:38:51.871604 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:04:26.764583547 +0000 UTC Feb 18 19:38:52 crc kubenswrapper[5007]: I0218 19:38:52.029653 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:38:52 crc kubenswrapper[5007]: I0218 19:38:52.031714 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:38:52 crc kubenswrapper[5007]: I0218 19:38:52.031759 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:38:52 crc kubenswrapper[5007]: I0218 19:38:52.031771 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:38:52 crc kubenswrapper[5007]: I0218 19:38:52.872060 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:46:09.145882054 +0000 UTC Feb 18 19:38:53 crc kubenswrapper[5007]: I0218 19:38:53.873056 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:33:01.778609587 +0000 UTC Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.167868 5007 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.167992 5007 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.441207 5007 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.757944 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.758456 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.764636 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.856470 5007 apiserver.go:52] "Watching apiserver" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.859026 5007 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.859299 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.859529 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.861830 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:38:54 crc kubenswrapper[5007]: E0218 19:38:54.861904 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.861973 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.862042 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:38:54 crc kubenswrapper[5007]: E0218 19:38:54.862073 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.862126 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:38:54 crc kubenswrapper[5007]: E0218 19:38:54.862153 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.862209 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.865534 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.865861 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.866152 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.866353 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.866694 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.866901 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.867143 5007 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.867352 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.867568 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.873200 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:35:44.824017635 +0000 UTC Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.903881 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.933477 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.953639 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.960046 5007 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.964122 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:54 crc kubenswrapper[5007]: I0218 19:38:54.976828 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.004565 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.022839 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: E0218 19:38:55.046592 5007 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.047133 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, 
/tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.065665 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.079767 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.099443 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.119710 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.132561 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.144166 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.873946 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:32:11.509598326 +0000 UTC Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.928085 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, 
/tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.949114 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.966843 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:55 crc kubenswrapper[5007]: I0218 19:38:55.993471 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.009033 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.009375 5007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.015132 5007 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.017503 5007 trace.go:236] Trace[1962943368]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:38:44.036) (total time: 11980ms): Feb 18 19:38:56 crc kubenswrapper[5007]: Trace[1962943368]: ---"Objects listed" error: 11980ms (19:38:56.017) Feb 18 19:38:56 crc kubenswrapper[5007]: Trace[1962943368]: [11.980622104s] [11.980622104s] END Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.017560 5007 reflector.go:368] Caches populated for 
*v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.019893 5007 trace.go:236] Trace[1693607893]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:38:44.064) (total time: 11954ms): Feb 18 19:38:56 crc kubenswrapper[5007]: Trace[1693607893]: ---"Objects listed" error: 11954ms (19:38:56.019) Feb 18 19:38:56 crc kubenswrapper[5007]: Trace[1693607893]: [11.954880926s] [11.954880926s] END Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.019920 5007 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.021796 5007 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.025205 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.028568 5007 trace.go:236] Trace[1590566674]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:38:44.241) (total time: 11786ms): Feb 18 19:38:56 crc kubenswrapper[5007]: Trace[1590566674]: ---"Objects listed" error: 11786ms (19:38:56.028) Feb 18 19:38:56 crc kubenswrapper[5007]: Trace[1590566674]: [11.786412662s] [11.786412662s] END Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.028639 5007 reflector.go:368] Caches 
populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.036783 5007 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.043557 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.053069 5007 csr.go:261] certificate signing request csr-knh2b is approved, waiting to be issued Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.064166 5007 csr.go:257] certificate signing request csr-knh2b is issued Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.116231 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.116547 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " 
Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.116671 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.116782 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.116932 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117043 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117193 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.116586 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117269 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117303 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117405 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117490 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117504 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117558 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117599 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117637 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117680 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117786 5007 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117808 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117864 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117906 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117940 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 
19:38:56.117975 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.117986 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118006 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.118042 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:38:56.618020455 +0000 UTC m=+21.400139979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118032 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118077 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118107 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118126 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 
19:38:56.118142 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118165 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118184 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118204 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118194 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118266 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118286 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118306 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118325 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118340 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118355 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118370 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118385 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118402 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118403 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118418 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118457 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118476 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118497 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118516 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118497 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118536 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118556 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118577 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118596 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118616 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118634 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118656 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118674 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118691 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118708 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118726 5007 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118744 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118762 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118781 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118800 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118818 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118839 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118873 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118890 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118906 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118922 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:38:56 crc 
kubenswrapper[5007]: I0218 19:38:56.118938 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118958 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118974 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118994 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119010 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119029 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119048 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119066 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119081 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119101 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119148 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119162 5007 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119178 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119194 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119213 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119229 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119248 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119265 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119286 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119308 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119326 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119343 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119360 5007 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119375 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119390 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119406 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119438 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119454 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" 
(UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119470 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119488 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119506 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119524 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119540 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119556 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119573 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119591 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119608 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119624 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119640 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:38:56 crc kubenswrapper[5007]: 
I0218 19:38:56.119657 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119675 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119719 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119738 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119754 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119772 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119789 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119805 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119821 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119839 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119857 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119874 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119889 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119906 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119923 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119940 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119960 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119979 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119995 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120015 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120033 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120050 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120086 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120107 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120125 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120141 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120156 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120171 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:38:56 crc kubenswrapper[5007]: 
I0218 19:38:56.120190 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120246 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120265 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120282 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120298 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120314 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120331 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120347 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120365 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120380 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120398 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120414 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120454 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120472 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120487 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120505 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120521 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120538 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120557 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120576 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120596 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120613 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120628 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120644 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120660 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120677 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120694 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120709 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120726 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120743 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120759 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120775 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120792 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120807 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120824 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120840 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120859 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120876 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120893 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120911 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120927 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120943 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120960 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120976 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120994 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121010 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121028 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121047 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121064 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121080 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:38:56 crc 
kubenswrapper[5007]: I0218 19:38:56.121096 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121115 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121131 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121148 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121164 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121182 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121197 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121213 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121230 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121248 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121267 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " 
Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121286 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121303 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121320 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121337 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121355 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121383 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121415 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121468 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121489 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121507 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121525 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121546 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121565 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121583 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121599 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121619 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121637 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121655 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121673 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121691 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 
19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121707 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121743 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121756 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121767 5007 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121777 5007 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121786 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121796 5007 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121807 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121816 5007 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121826 5007 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121836 5007 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.121846 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118550 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118550 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118677 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118709 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118722 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118916 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118928 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.118943 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119007 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119231 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119238 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119267 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119265 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119337 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119462 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119525 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119567 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119707 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119714 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119889 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119890 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.119951 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120010 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120176 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120201 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.120712 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.121980 5007 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.129790 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:38:56.629749887 +0000 UTC m=+21.411869451 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.122741 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.122792 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.122815 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.123094 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.123177 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.123496 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.123535 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.123700 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.123698 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.123755 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.123994 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.124070 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.124212 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.124297 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.124323 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.124337 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.124647 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.124816 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.124836 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.125091 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.125861 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.125912 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.126592 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.126778 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.127276 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.127285 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.127529 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.127643 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.127821 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.127898 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.128356 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.128367 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.128960 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.129077 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.129110 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.130730 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.131274 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.132300 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.132469 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.132532 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.132569 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.132652 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.132908 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.133117 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.133356 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.133643 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.133651 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.133811 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.133828 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.133930 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.134035 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.134262 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.134281 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.134981 5007 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.135038 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:38:56.635020911 +0000 UTC m=+21.417140435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.143602 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.144097 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.144420 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.145458 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.132471 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.145986 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.146033 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.146158 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.146343 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.146397 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.146497 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.146891 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.147505 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.147561 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.147747 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.152858 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.153521 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.153939 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.154021 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.154340 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.154601 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.155453 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.155584 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.145079 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.157763 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.158865 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.166497 5007 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.173764 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.174985 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.175005 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.175463 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.175792 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.177185 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.177321 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.177651 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.177833 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.177919 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.178165 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.178229 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.178502 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.178809 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.178821 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.179865 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.180902 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.185597 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.186002 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.186096 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.186379 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.186380 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.186726 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.188551 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.188553 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.189110 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.189233 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.189303 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.189528 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.189761 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.189987 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.190105 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.190301 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.190464 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.190748 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.190834 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.190820 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.190943 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.191094 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.191491 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.192051 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.192078 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.192143 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.192230 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.192406 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.192466 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.192599 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.192666 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.192784 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.192826 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.193100 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.193288 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.193539 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.193589 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.193672 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.193976 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.194197 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.194313 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.194410 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.194707 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.194757 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.194773 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.195208 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.195346 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.195613 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.195819 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.195924 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.196343 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.196694 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.196792 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.197185 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.198001 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.201882 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.209222 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.209672 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.209851 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.211189 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.211223 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.211239 5007 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.211307 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:38:56.711286026 +0000 UTC m=+21.493405560 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.211501 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.211578 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.213523 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.217275 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.220739 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.220791 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.220811 5007 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.220752 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.220875 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.220888 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:38:56.720859415 +0000 UTC m=+21.502979129 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.221414 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222315 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222346 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222394 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222404 5007 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222414 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222421 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222442 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222450 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222459 5007 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222466 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222474 5007 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222483 5007 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222491 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222499 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222507 5007 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222515 5007 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222523 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222531 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222539 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222548 5007 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222557 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222565 5007 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc 
kubenswrapper[5007]: I0218 19:38:56.222573 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222581 5007 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222590 5007 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222597 5007 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222605 5007 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222613 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222622 5007 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222629 5007 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222638 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222646 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222655 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222664 5007 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222672 5007 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222681 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222690 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222699 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222707 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222716 5007 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222724 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222734 5007 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222742 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222750 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222759 5007 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222767 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222775 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222783 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222790 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222798 5007 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222806 5007 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222815 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222823 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222830 5007 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222838 5007 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222847 5007 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222855 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222864 5007 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222871 5007 reconciler_common.go:293] "Volume 
detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222880 5007 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222888 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222895 5007 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222903 5007 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222910 5007 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222918 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222928 5007 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222937 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222945 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222953 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222960 5007 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222969 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222977 5007 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222985 5007 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.222993 5007 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223001 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223008 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223015 5007 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223023 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223031 5007 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223038 5007 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223081 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223088 5007 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223098 5007 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223108 5007 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223119 5007 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223129 5007 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223139 5007 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223150 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223160 5007 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223170 5007 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223180 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223191 5007 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223201 5007 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223212 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223223 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223233 5007 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223242 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223250 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223259 5007 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223267 5007 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223276 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223285 5007 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" 
Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223293 5007 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223300 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223308 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223317 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223326 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223333 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223341 5007 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223350 5007 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223358 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223365 5007 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223375 5007 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223383 5007 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223393 5007 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223401 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223409 5007 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223417 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223446 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223466 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223474 5007 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223484 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223493 5007 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223501 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223509 5007 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223517 5007 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223525 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223532 5007 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223540 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223548 5007 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223556 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223564 5007 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223572 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223580 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223588 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223595 5007 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223602 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223610 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223617 5007 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: 
I0218 19:38:56.223625 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223634 5007 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223642 5007 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223650 5007 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223657 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223665 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223674 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223682 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223691 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223698 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223706 5007 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223715 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223724 5007 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223732 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223740 5007 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223748 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223756 5007 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223764 5007 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223772 5007 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223780 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223788 5007 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223796 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223804 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223812 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223821 5007 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223830 5007 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223838 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223848 5007 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223855 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: 
I0218 19:38:56.223864 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223872 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223880 5007 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223888 5007 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223896 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223904 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223912 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223920 5007 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223927 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223935 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223943 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223951 5007 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223959 5007 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.223978 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.224021 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.224072 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.232002 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.238018 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.238844 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.324986 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.325014 5007 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.325025 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.400420 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.405775 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.413607 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.627153 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.627476 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:38:57.627443697 +0000 UTC m=+22.409563221 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.727843 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.727893 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.727920 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.727944 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.728060 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.728073 5007 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.728140 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:38:57.728121124 +0000 UTC m=+22.510240648 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.728078 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.728168 5007 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.728205 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:38:57.728195417 +0000 UTC m=+22.510314941 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.728219 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.728254 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.728267 5007 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.728313 5007 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.728334 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:38:57.72831654 +0000 UTC m=+22.510436064 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.728512 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:38:57.728476685 +0000 UTC m=+22.510596239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.874551 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:23:00.132685083 +0000 UTC Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.909321 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.909367 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:38:56 crc kubenswrapper[5007]: I0218 19:38:56.909395 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.909654 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.909732 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:38:56 crc kubenswrapper[5007]: E0218 19:38:56.909802 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.044712 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686"} Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.044757 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b"} Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.044768 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"aacbf14b1e8888a9ba66afb8d3eb642f39da2a7626ead7f3247da34a374c2170"} Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.045982 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"debc7e8a23a87dcee2872863d66ba1de473d77a0d4dbfdde6e91f22bd2b2232c"} Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.047507 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b"} Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.047561 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"81e95a4174372316cbb25c13c0edd4b3fa145b9ed3d71e36f56d893468a5fb4a"} Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.064802 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.065769 5007 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 19:33:56 +0000 UTC, rotation deadline is 2027-01-05 06:57:15.396073004 +0000 UTC Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.065814 5007 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7691h18m18.330262699s for next certificate rotation Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.080220 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.091944 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.114343 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.127621 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.147243 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.164768 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.188121 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.197802 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.226820 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.238406 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.253916 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.276277 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.295704 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.567909 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bgdpv"] Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.568262 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bgdpv" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.570104 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.570625 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.571532 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.580868 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.593883 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.613728 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.633162 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.641186 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 19:38:57.641379 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:38:59.641355627 +0000 UTC m=+24.423475151 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.641481 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq5dt\" (UniqueName: \"kubernetes.io/projected/10ebe7a1-443b-44bf-ba10-9f79a766a932-kube-api-access-gq5dt\") pod \"node-resolver-bgdpv\" (UID: \"10ebe7a1-443b-44bf-ba10-9f79a766a932\") " pod="openshift-dns/node-resolver-bgdpv" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.641512 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/10ebe7a1-443b-44bf-ba10-9f79a766a932-hosts-file\") pod \"node-resolver-bgdpv\" (UID: \"10ebe7a1-443b-44bf-ba10-9f79a766a932\") " pod="openshift-dns/node-resolver-bgdpv" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.647073 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.657086 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.672349 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.684097 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.743007 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq5dt\" (UniqueName: \"kubernetes.io/projected/10ebe7a1-443b-44bf-ba10-9f79a766a932-kube-api-access-gq5dt\") pod \"node-resolver-bgdpv\" (UID: \"10ebe7a1-443b-44bf-ba10-9f79a766a932\") " pod="openshift-dns/node-resolver-bgdpv" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.743065 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/10ebe7a1-443b-44bf-ba10-9f79a766a932-hosts-file\") pod \"node-resolver-bgdpv\" (UID: \"10ebe7a1-443b-44bf-ba10-9f79a766a932\") " pod="openshift-dns/node-resolver-bgdpv" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.743097 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.743126 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.743154 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.743176 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 
19:38:57.743285 5007 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 19:38:57.743298 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 19:38:57.743317 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 19:38:57.743330 5007 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.743283 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/10ebe7a1-443b-44bf-ba10-9f79a766a932-hosts-file\") pod \"node-resolver-bgdpv\" (UID: \"10ebe7a1-443b-44bf-ba10-9f79a766a932\") " pod="openshift-dns/node-resolver-bgdpv" Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 19:38:57.743351 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:38:59.743332572 +0000 UTC m=+24.525452096 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 19:38:57.743454 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 19:38:57.743564 5007 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 19:38:57.743508 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:38:59.743479216 +0000 UTC m=+24.525598750 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 19:38:57.743624 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 19:38:57.743672 5007 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 19:38:57.743717 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:38:59.743656452 +0000 UTC m=+24.525776076 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:38:57 crc kubenswrapper[5007]: E0218 19:38:57.743801 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:38:59.743767595 +0000 UTC m=+24.525887159 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.770221 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq5dt\" (UniqueName: \"kubernetes.io/projected/10ebe7a1-443b-44bf-ba10-9f79a766a932-kube-api-access-gq5dt\") pod \"node-resolver-bgdpv\" (UID: \"10ebe7a1-443b-44bf-ba10-9f79a766a932\") " pod="openshift-dns/node-resolver-bgdpv" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.875311 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:52:15.340877134 +0000 UTC Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.879581 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bgdpv" Feb 18 19:38:57 crc kubenswrapper[5007]: W0218 19:38:57.890310 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10ebe7a1_443b_44bf_ba10_9f79a766a932.slice/crio-cc2ccc689989300d1c059a79cbc309e41171fdac7dd1e36c19d2eaafe5d85555 WatchSource:0}: Error finding container cc2ccc689989300d1c059a79cbc309e41171fdac7dd1e36c19d2eaafe5d85555: Status 404 returned error can't find the container with id cc2ccc689989300d1c059a79cbc309e41171fdac7dd1e36c19d2eaafe5d85555 Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.913998 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.914736 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.916651 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.917553 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.919017 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.919854 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.920873 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.924276 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.925329 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.926877 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.927747 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.930512 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.931686 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.932779 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.935421 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.936077 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.937497 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.937911 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.938500 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.939581 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.940073 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.941116 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.941780 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.943492 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.943977 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.944679 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.945978 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.946848 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.947820 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.948338 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.949345 5007 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.949475 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.951326 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.952365 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.952983 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.954952 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.955832 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.957074 5007 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.957946 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.959489 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.960006 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.961073 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.961710 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.962823 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.963327 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.964630 5007 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.965363 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.966550 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.967081 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.968048 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.968568 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.969496 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.970089 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.970675 5007 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.972801 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-d8p88"] Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.973303 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kjbbj"] Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.973729 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-hhb7p"] Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.974082 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kjbbj" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.974164 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.975879 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.980210 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.983655 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.984000 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.984124 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.984226 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.984324 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.984453 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.984559 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.984649 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.984809 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.984888 5007 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 19:38:57 crc kubenswrapper[5007]: I0218 19:38:57.985142 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.004269 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",
\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.022860 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.040688 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.045974 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: 
\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046009 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-multus-conf-dir\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046026 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1f79950-bc88-4358-844e-ba7f87d1b564-proxy-tls\") pod \"machine-config-daemon-d8p88\" (UID: \"e1f79950-bc88-4358-844e-ba7f87d1b564\") " pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046042 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56pg\" (UniqueName: \"kubernetes.io/projected/e1f79950-bc88-4358-844e-ba7f87d1b564-kube-api-access-d56pg\") pod \"machine-config-daemon-d8p88\" (UID: \"e1f79950-bc88-4358-844e-ba7f87d1b564\") " pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046057 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-multus-socket-dir-parent\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046070 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/9c2fc812-cdd8-415b-a138-b8a3e14fd396-multus-daemon-config\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046084 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e1f79950-bc88-4358-844e-ba7f87d1b564-rootfs\") pod \"machine-config-daemon-d8p88\" (UID: \"e1f79950-bc88-4358-844e-ba7f87d1b564\") " pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046101 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-system-cni-dir\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046114 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-run-k8s-cni-cncf-io\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046127 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-run-netns\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046156 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-etc-kubernetes\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046169 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-os-release\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046185 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-multus-cni-dir\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046198 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9c2fc812-cdd8-415b-a138-b8a3e14fd396-cni-binary-copy\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046214 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-var-lib-kubelet\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046228 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-cni-binary-copy\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046243 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r687l\" (UniqueName: \"kubernetes.io/projected/9c2fc812-cdd8-415b-a138-b8a3e14fd396-kube-api-access-r687l\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046256 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-system-cni-dir\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046273 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1f79950-bc88-4358-844e-ba7f87d1b564-mcd-auth-proxy-config\") pod \"machine-config-daemon-d8p88\" (UID: \"e1f79950-bc88-4358-844e-ba7f87d1b564\") " pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046288 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-os-release\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046307 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-var-lib-cni-bin\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046323 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-var-lib-cni-multus\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046339 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046355 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-cnibin\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046376 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-cnibin\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046391 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-run-multus-certs\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046406 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-hostroot\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.046420 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xqkh\" (UniqueName: \"kubernetes.io/projected/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-kube-api-access-6xqkh\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.050493 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bgdpv" event={"ID":"10ebe7a1-443b-44bf-ba10-9f79a766a932","Type":"ContainerStarted","Data":"cc2ccc689989300d1c059a79cbc309e41171fdac7dd1e36c19d2eaafe5d85555"} Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.101975 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.147889 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-hostroot\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.147940 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xqkh\" (UniqueName: \"kubernetes.io/projected/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-kube-api-access-6xqkh\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.147965 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 
19:38:58.147985 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-multus-conf-dir\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148009 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9c2fc812-cdd8-415b-a138-b8a3e14fd396-multus-daemon-config\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148029 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e1f79950-bc88-4358-844e-ba7f87d1b564-rootfs\") pod \"machine-config-daemon-d8p88\" (UID: \"e1f79950-bc88-4358-844e-ba7f87d1b564\") " pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148049 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1f79950-bc88-4358-844e-ba7f87d1b564-proxy-tls\") pod \"machine-config-daemon-d8p88\" (UID: \"e1f79950-bc88-4358-844e-ba7f87d1b564\") " pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148068 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56pg\" (UniqueName: \"kubernetes.io/projected/e1f79950-bc88-4358-844e-ba7f87d1b564-kube-api-access-d56pg\") pod \"machine-config-daemon-d8p88\" (UID: \"e1f79950-bc88-4358-844e-ba7f87d1b564\") " pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148089 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-multus-socket-dir-parent\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148116 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-system-cni-dir\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148130 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-run-k8s-cni-cncf-io\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148146 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-run-netns\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148170 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-multus-cni-dir\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148186 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-etc-kubernetes\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148203 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-os-release\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148218 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9c2fc812-cdd8-415b-a138-b8a3e14fd396-cni-binary-copy\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148242 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-var-lib-kubelet\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148257 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-cni-binary-copy\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148277 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e1f79950-bc88-4358-844e-ba7f87d1b564-mcd-auth-proxy-config\") pod \"machine-config-daemon-d8p88\" (UID: \"e1f79950-bc88-4358-844e-ba7f87d1b564\") " pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148293 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-os-release\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148312 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r687l\" (UniqueName: \"kubernetes.io/projected/9c2fc812-cdd8-415b-a138-b8a3e14fd396-kube-api-access-r687l\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148129 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-multus-conf-dir\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148330 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-system-cni-dir\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148399 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-etc-kubernetes\") pod 
\"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148419 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-var-lib-cni-bin\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148476 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-var-lib-cni-multus\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148517 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-system-cni-dir\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148545 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148561 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-multus-cni-dir\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 
19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148599 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-run-k8s-cni-cncf-io\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148599 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-multus-socket-dir-parent\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148626 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-run-netns\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148639 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-cnibin\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148292 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148657 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-var-lib-cni-bin\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148606 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-cnibin\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148378 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-system-cni-dir\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148656 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/e1f79950-bc88-4358-844e-ba7f87d1b564-rootfs\") pod \"machine-config-daemon-d8p88\" (UID: \"e1f79950-bc88-4358-844e-ba7f87d1b564\") " pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148699 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-cnibin\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148715 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9c2fc812-cdd8-415b-a138-b8a3e14fd396-multus-daemon-config\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148745 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-cnibin\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148773 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-var-lib-kubelet\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148770 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148814 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-run-multus-certs\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148837 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-var-lib-cni-multus\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148851 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-os-release\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148871 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-host-run-multus-certs\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.148898 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-os-release\") 
pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.149138 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9c2fc812-cdd8-415b-a138-b8a3e14fd396-hostroot\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.149143 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.149307 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1f79950-bc88-4358-844e-ba7f87d1b564-mcd-auth-proxy-config\") pod \"machine-config-daemon-d8p88\" (UID: \"e1f79950-bc88-4358-844e-ba7f87d1b564\") " pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.149307 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-cni-binary-copy\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.149365 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9c2fc812-cdd8-415b-a138-b8a3e14fd396-cni-binary-copy\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " 
pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.161058 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1f79950-bc88-4358-844e-ba7f87d1b564-proxy-tls\") pod \"machine-config-daemon-d8p88\" (UID: \"e1f79950-bc88-4358-844e-ba7f87d1b564\") " pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.166018 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.166356 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56pg\" (UniqueName: \"kubernetes.io/projected/e1f79950-bc88-4358-844e-ba7f87d1b564-kube-api-access-d56pg\") pod \"machine-config-daemon-d8p88\" (UID: \"e1f79950-bc88-4358-844e-ba7f87d1b564\") " pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.167476 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xqkh\" (UniqueName: 
\"kubernetes.io/projected/8db715e4-a72b-4f74-b2e9-9d529e2b0a47-kube-api-access-6xqkh\") pod \"multus-additional-cni-plugins-hhb7p\" (UID: \"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\") " pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.168642 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r687l\" (UniqueName: \"kubernetes.io/projected/9c2fc812-cdd8-415b-a138-b8a3e14fd396-kube-api-access-r687l\") pod \"multus-kjbbj\" (UID: \"9c2fc812-cdd8-415b-a138-b8a3e14fd396\") " pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.194395 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.208764 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.223451 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.237462 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.249559 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.274461 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.288583 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kjbbj" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.296968 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.304532 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.306744 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.331604 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.351537 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.367037 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.456885 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.457002 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-75flr"] Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.457780 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: W0218 19:38:58.478985 5007 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Feb 18 19:38:58 crc kubenswrapper[5007]: W0218 19:38:58.479022 5007 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Feb 18 19:38:58 crc kubenswrapper[5007]: W0218 19:38:58.479031 5007 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Feb 18 19:38:58 crc kubenswrapper[5007]: E0218 19:38:58.479031 5007 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:38:58 crc kubenswrapper[5007]: E0218 19:38:58.479046 5007 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is 
forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:38:58 crc kubenswrapper[5007]: E0218 19:38:58.479070 5007 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:38:58 crc kubenswrapper[5007]: W0218 19:38:58.478985 5007 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Feb 18 19:38:58 crc kubenswrapper[5007]: E0218 19:38:58.479110 5007 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:38:58 crc kubenswrapper[5007]: W0218 19:38:58.478984 5007 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship 
found between node 'crc' and this object Feb 18 19:38:58 crc kubenswrapper[5007]: W0218 19:38:58.479121 5007 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Feb 18 19:38:58 crc kubenswrapper[5007]: E0218 19:38:58.479138 5007 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:38:58 crc kubenswrapper[5007]: E0218 19:38:58.479161 5007 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:38:58 crc kubenswrapper[5007]: W0218 19:38:58.478986 5007 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Feb 18 19:38:58 crc kubenswrapper[5007]: E0218 19:38:58.479187 5007 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.523789 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.557908 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-cni-netd\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc 
kubenswrapper[5007]: I0218 19:38:58.557949 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de3b9770-e878-4070-978e-29931dd72c35-ovn-node-metrics-cert\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.557989 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-slash\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558006 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-var-lib-openvswitch\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558023 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-run-ovn-kubernetes\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558039 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-log-socket\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558070 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89t88\" (UniqueName: \"kubernetes.io/projected/de3b9770-e878-4070-978e-29931dd72c35-kube-api-access-89t88\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558091 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-config\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558114 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-script-lib\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558131 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-kubelet\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558151 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-run-netns\") pod \"ovnkube-node-75flr\" (UID: 
\"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558170 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-node-log\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558193 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-etc-openvswitch\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558210 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-cni-bin\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558227 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-ovn\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558243 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-systemd\") pod \"ovnkube-node-75flr\" (UID: 
\"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558262 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558281 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-systemd-units\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558298 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-openvswitch\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.558317 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-env-overrides\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.561318 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.585835 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.604941 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.620853 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.638589 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.659668 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-slash\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.659746 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-slash\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.659878 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-var-lib-openvswitch\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.659932 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-log-socket\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.659960 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-var-lib-openvswitch\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.659970 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-run-ovn-kubernetes\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.659994 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-run-ovn-kubernetes\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660027 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89t88\" (UniqueName: \"kubernetes.io/projected/de3b9770-e878-4070-978e-29931dd72c35-kube-api-access-89t88\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660041 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-log-socket\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660062 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-config\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660130 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-kubelet\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660150 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-script-lib\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660168 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-run-netns\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660179 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-kubelet\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660206 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-node-log\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660186 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-node-log\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660253 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-etc-openvswitch\") pod \"ovnkube-node-75flr\" (UID: 
\"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660262 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-run-netns\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660281 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-cni-bin\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660300 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-etc-openvswitch\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660312 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-ovn\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660337 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-systemd-units\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc 
kubenswrapper[5007]: I0218 19:38:58.660361 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-systemd\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660357 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-cni-bin\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660385 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-systemd-units\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660364 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-ovn\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660389 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660422 5007 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660456 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-systemd\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660466 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-openvswitch\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660491 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-openvswitch\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660497 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-env-overrides\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660543 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-cni-netd\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660565 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de3b9770-e878-4070-978e-29931dd72c35-ovn-node-metrics-cert\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.660609 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-cni-netd\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.666842 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.680911 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.710882 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.725611 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.737750 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.749909 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.767626 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.781070 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.794051 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.811893 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.875451 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:31:18.011204207 +0000 UTC Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.909287 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.909298 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:38:58 crc kubenswrapper[5007]: I0218 19:38:58.909299 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:38:58 crc kubenswrapper[5007]: E0218 19:38:58.909481 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:38:58 crc kubenswrapper[5007]: E0218 19:38:58.909581 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:38:58 crc kubenswrapper[5007]: E0218 19:38:58.909662 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.054639 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kjbbj" event={"ID":"9c2fc812-cdd8-415b-a138-b8a3e14fd396","Type":"ContainerStarted","Data":"0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6"} Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.054697 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kjbbj" event={"ID":"9c2fc812-cdd8-415b-a138-b8a3e14fd396","Type":"ContainerStarted","Data":"3cd02f683c658c71e36cd80eb36fc4042b8f4ae00936f51973820aba59efd632"} Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.055907 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bgdpv" event={"ID":"10ebe7a1-443b-44bf-ba10-9f79a766a932","Type":"ContainerStarted","Data":"a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3"} Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.057487 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf"} Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.059715 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-hhb7p" event={"ID":"8db715e4-a72b-4f74-b2e9-9d529e2b0a47","Type":"ContainerStarted","Data":"63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3"} Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.059744 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" event={"ID":"8db715e4-a72b-4f74-b2e9-9d529e2b0a47","Type":"ContainerStarted","Data":"18c8708a2acd2f637f591fd4662f1d3472a06148936032e177e4a341260e0460"} Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.061809 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad"} Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.061877 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec"} Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.061900 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"5f404f81ff850a5d14e9ff27adacf8d17ac8318db6785c823b69325f4111036c"} Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.073843 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.087821 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.103658 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.121202 5007 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.138781 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.157605 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.172022 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.182392 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.213604 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.235111 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.248919 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.267168 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.281540 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.292572 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.308650 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-me
trics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"
etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":
\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.322555 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.323654 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.337454 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.354227 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.365657 5007 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.377045 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.389166 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.404760 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.415814 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.430911 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.544947 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.551122 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-config\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.635028 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-w77zw"] Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.635367 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-w77zw" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.637247 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.637906 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.638039 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.638212 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.651867 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.660545 5007 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.660637 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-script-lib podName:de3b9770-e878-4070-978e-29931dd72c35 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:00.160621079 +0000 UTC m=+24.942740603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-script-lib") pod "ovnkube-node-75flr" (UID: "de3b9770-e878-4070-978e-29931dd72c35") : failed to sync configmap cache: timed out waiting for the condition Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.660669 5007 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.660700 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-env-overrides podName:de3b9770-e878-4070-978e-29931dd72c35 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:00.160689441 +0000 UTC m=+24.942808965 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-env-overrides") pod "ovnkube-node-75flr" (UID: "de3b9770-e878-4070-978e-29931dd72c35") : failed to sync configmap cache: timed out waiting for the condition Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.660733 5007 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.660796 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de3b9770-e878-4070-978e-29931dd72c35-ovn-node-metrics-cert podName:de3b9770-e878-4070-978e-29931dd72c35 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:00.160780984 +0000 UTC m=+24.942900508 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/de3b9770-e878-4070-978e-29931dd72c35-ovn-node-metrics-cert") pod "ovnkube-node-75flr" (UID: "de3b9770-e878-4070-978e-29931dd72c35") : failed to sync secret cache: timed out waiting for the condition Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.669805 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.669929 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.670084 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:39:03.670068085 +0000 UTC m=+28.452187609 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.676215 5007 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.682752 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.717141 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.741111 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.754204 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.770795 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.770846 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0118dcb7-4b95-4644-8702-62d7c92f9001-serviceca\") pod \"node-ca-w77zw\" (UID: \"0118dcb7-4b95-4644-8702-62d7c92f9001\") " pod="openshift-image-registry/node-ca-w77zw" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.770863 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk298\" (UniqueName: \"kubernetes.io/projected/0118dcb7-4b95-4644-8702-62d7c92f9001-kube-api-access-wk298\") pod \"node-ca-w77zw\" (UID: \"0118dcb7-4b95-4644-8702-62d7c92f9001\") " pod="openshift-image-registry/node-ca-w77zw" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.770887 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0118dcb7-4b95-4644-8702-62d7c92f9001-host\") pod \"node-ca-w77zw\" (UID: \"0118dcb7-4b95-4644-8702-62d7c92f9001\") " pod="openshift-image-registry/node-ca-w77zw" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.770906 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.770935 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.770960 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.771070 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.771085 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.771095 5007 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.771135 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:03.771122333 +0000 UTC m=+28.553241857 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.771178 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.771189 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.771195 5007 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.771214 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:03.771208325 +0000 UTC m=+28.553327839 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.771284 5007 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.771303 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:03.771297978 +0000 UTC m=+28.553417502 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.771339 5007 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:38:59 crc kubenswrapper[5007]: E0218 19:38:59.771358 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:03.77135171 +0000 UTC m=+28.553471234 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.776185 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.801379 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.812393 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 
19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.828387 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.840440 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.854082 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.870186 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.871607 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0118dcb7-4b95-4644-8702-62d7c92f9001-serviceca\") pod \"node-ca-w77zw\" (UID: \"0118dcb7-4b95-4644-8702-62d7c92f9001\") " pod="openshift-image-registry/node-ca-w77zw" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.871645 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk298\" (UniqueName: \"kubernetes.io/projected/0118dcb7-4b95-4644-8702-62d7c92f9001-kube-api-access-wk298\") pod \"node-ca-w77zw\" (UID: \"0118dcb7-4b95-4644-8702-62d7c92f9001\") " pod="openshift-image-registry/node-ca-w77zw" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.871694 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0118dcb7-4b95-4644-8702-62d7c92f9001-host\") pod \"node-ca-w77zw\" (UID: \"0118dcb7-4b95-4644-8702-62d7c92f9001\") " pod="openshift-image-registry/node-ca-w77zw" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.871768 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0118dcb7-4b95-4644-8702-62d7c92f9001-host\") pod \"node-ca-w77zw\" 
(UID: \"0118dcb7-4b95-4644-8702-62d7c92f9001\") " pod="openshift-image-registry/node-ca-w77zw" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.872603 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0118dcb7-4b95-4644-8702-62d7c92f9001-serviceca\") pod \"node-ca-w77zw\" (UID: \"0118dcb7-4b95-4644-8702-62d7c92f9001\") " pod="openshift-image-registry/node-ca-w77zw" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.875646 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:05:12.13113906 +0000 UTC Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.886048 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.894641 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk298\" (UniqueName: \"kubernetes.io/projected/0118dcb7-4b95-4644-8702-62d7c92f9001-kube-api-access-wk298\") pod \"node-ca-w77zw\" (UID: \"0118dcb7-4b95-4644-8702-62d7c92f9001\") " pod="openshift-image-registry/node-ca-w77zw" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.911242 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.947002 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-w77zw" Feb 18 19:38:59 crc kubenswrapper[5007]: I0218 19:38:59.973007 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:38:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.008310 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.020805 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 19:39:00 crc kubenswrapper[5007]: E0218 19:39:00.026525 5007 projected.go:194] Error preparing data for projected volume kube-api-access-89t88 for pod openshift-ovn-kubernetes/ovnkube-node-75flr: failed to sync configmap cache: timed out waiting for the condition Feb 18 19:39:00 crc kubenswrapper[5007]: E0218 19:39:00.026600 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/de3b9770-e878-4070-978e-29931dd72c35-kube-api-access-89t88 podName:de3b9770-e878-4070-978e-29931dd72c35 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:00.526578276 +0000 UTC m=+25.308697800 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-89t88" (UniqueName: "kubernetes.io/projected/de3b9770-e878-4070-978e-29931dd72c35-kube-api-access-89t88") pod "ovnkube-node-75flr" (UID: "de3b9770-e878-4070-978e-29931dd72c35") : failed to sync configmap cache: timed out waiting for the condition Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.069960 5007 generic.go:334] "Generic (PLEG): container finished" podID="8db715e4-a72b-4f74-b2e9-9d529e2b0a47" containerID="63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3" exitCode=0 Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.070019 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" event={"ID":"8db715e4-a72b-4f74-b2e9-9d529e2b0a47","Type":"ContainerDied","Data":"63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3"} Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.071705 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-w77zw" event={"ID":"0118dcb7-4b95-4644-8702-62d7c92f9001","Type":"ContainerStarted","Data":"75bb9e45f2cc95e2378b17c18da7d339f0520f637273ea3639881bb3e9af9ccb"} Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.086665 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.100542 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.112718 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.122392 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.135200 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b
5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.151378 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.163927 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.173703 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-script-lib\") pod \"ovnkube-node-75flr\" (UID: 
\"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.173778 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-env-overrides\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.173807 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de3b9770-e878-4070-978e-29931dd72c35-ovn-node-metrics-cert\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.174696 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-script-lib\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.174930 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-env-overrides\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.177946 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de3b9770-e878-4070-978e-29931dd72c35-ovn-node-metrics-cert\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.193729 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba
38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.208978 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.221549 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.236937 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.251160 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.280248 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:00Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.577389 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89t88\" (UniqueName: \"kubernetes.io/projected/de3b9770-e878-4070-978e-29931dd72c35-kube-api-access-89t88\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.583315 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89t88\" (UniqueName: \"kubernetes.io/projected/de3b9770-e878-4070-978e-29931dd72c35-kube-api-access-89t88\") pod \"ovnkube-node-75flr\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.598044 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:00 crc kubenswrapper[5007]: W0218 19:39:00.613393 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde3b9770_e878_4070_978e_29931dd72c35.slice/crio-c3b7729002606e0003095c4dea7a88371f7bd7086109b994f79727a533762c1d WatchSource:0}: Error finding container c3b7729002606e0003095c4dea7a88371f7bd7086109b994f79727a533762c1d: Status 404 returned error can't find the container with id c3b7729002606e0003095c4dea7a88371f7bd7086109b994f79727a533762c1d Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.876323 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:07:33.926795093 +0000 UTC Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.909526 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.909551 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:00 crc kubenswrapper[5007]: I0218 19:39:00.909537 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:00 crc kubenswrapper[5007]: E0218 19:39:00.909651 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:00 crc kubenswrapper[5007]: E0218 19:39:00.909764 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:00 crc kubenswrapper[5007]: E0218 19:39:00.909850 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.076681 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" event={"ID":"8db715e4-a72b-4f74-b2e9-9d529e2b0a47","Type":"ContainerStarted","Data":"4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea"} Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.077914 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-w77zw" event={"ID":"0118dcb7-4b95-4644-8702-62d7c92f9001","Type":"ContainerStarted","Data":"c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a"} Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.079307 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" containerID="8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065" exitCode=0 Feb 18 19:39:01 crc 
kubenswrapper[5007]: I0218 19:39:01.079341 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065"} Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.079405 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"c3b7729002606e0003095c4dea7a88371f7bd7086109b994f79727a533762c1d"} Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.095220 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.109927 5007 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.122300 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e1
8d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.145543 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.162785 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.188533 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.190589 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.194803 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.201165 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.210399 5007 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.225702 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.244510 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.259163 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.281296 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.297296 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.313967 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.330355 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.341264 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.361102 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.375388 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.393582 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.410949 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.438532 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.452543 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.468339 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.483165 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.505203 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.530826 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.547298 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.564332 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:01 crc kubenswrapper[5007]: I0218 19:39:01.877303 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:09:33.822298466 +0000 UTC Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.086205 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746"} Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.086259 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78"} Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.086269 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" 
event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91"} Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.086278 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07"} Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.086286 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105"} Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.086294 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201"} Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.088516 5007 generic.go:334] "Generic (PLEG): container finished" podID="8db715e4-a72b-4f74-b2e9-9d529e2b0a47" containerID="4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea" exitCode=0 Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.088579 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" event={"ID":"8db715e4-a72b-4f74-b2e9-9d529e2b0a47","Type":"ContainerDied","Data":"4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea"} Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.108249 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.125756 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.148126 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.160307 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.175146 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.193206 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.208920 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.224633 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.240036 5007 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.256376 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.270024 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.282474 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.296217 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.315617 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.422405 5007 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.423914 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.423971 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.423993 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.424145 5007 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.431177 5007 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.431363 5007 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.432325 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.432380 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.432393 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.432413 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.432441 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:02Z","lastTransitionTime":"2026-02-18T19:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:02 crc kubenswrapper[5007]: E0218 19:39:02.450560 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.454547 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.454593 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.454609 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.454630 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.454643 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:02Z","lastTransitionTime":"2026-02-18T19:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:02 crc kubenswrapper[5007]: E0218 19:39:02.469227 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.473021 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.473093 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.473111 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.473139 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.473171 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:02Z","lastTransitionTime":"2026-02-18T19:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:02 crc kubenswrapper[5007]: E0218 19:39:02.487613 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.492167 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.492232 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.492249 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.492273 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.492290 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:02Z","lastTransitionTime":"2026-02-18T19:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:02 crc kubenswrapper[5007]: E0218 19:39:02.506199 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.510353 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.510387 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.510396 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.510417 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.510447 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:02Z","lastTransitionTime":"2026-02-18T19:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:02 crc kubenswrapper[5007]: E0218 19:39:02.523267 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:02Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:02 crc kubenswrapper[5007]: E0218 19:39:02.523388 5007 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.525393 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.525468 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.525490 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.525520 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.525544 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:02Z","lastTransitionTime":"2026-02-18T19:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.627498 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.627542 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.627552 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.627568 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.627578 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:02Z","lastTransitionTime":"2026-02-18T19:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.729977 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.730051 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.730071 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.730098 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.730123 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:02Z","lastTransitionTime":"2026-02-18T19:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.833673 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.833738 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.833748 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.833767 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.833785 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:02Z","lastTransitionTime":"2026-02-18T19:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.878403 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:52:39.592513661 +0000 UTC Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.909209 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.909254 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.909214 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:02 crc kubenswrapper[5007]: E0218 19:39:02.909354 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:02 crc kubenswrapper[5007]: E0218 19:39:02.909552 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:02 crc kubenswrapper[5007]: E0218 19:39:02.909700 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.936194 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.936275 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.936294 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.936325 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:02 crc kubenswrapper[5007]: I0218 19:39:02.936345 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:02Z","lastTransitionTime":"2026-02-18T19:39:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.038531 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.038590 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.038603 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.038624 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.038638 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:03Z","lastTransitionTime":"2026-02-18T19:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.094362 5007 generic.go:334] "Generic (PLEG): container finished" podID="8db715e4-a72b-4f74-b2e9-9d529e2b0a47" containerID="7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad" exitCode=0 Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.094486 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" event={"ID":"8db715e4-a72b-4f74-b2e9-9d529e2b0a47","Type":"ContainerDied","Data":"7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad"} Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.115132 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.135854 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.141338 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.141453 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.141476 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.141511 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.141536 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:03Z","lastTransitionTime":"2026-02-18T19:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.153932 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.172390 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b
5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.184179 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.200100 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.212454 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.234514 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.244228 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.244273 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.244284 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.244299 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.244308 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:03Z","lastTransitionTime":"2026-02-18T19:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.246319 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.259393 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.272125 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.286001 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, 
/tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.298665 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.310623 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.346966 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:03 crc 
kubenswrapper[5007]: I0218 19:39:03.347001 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.347010 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.347027 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.347041 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:03Z","lastTransitionTime":"2026-02-18T19:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.449546 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.449595 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.449605 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.449622 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.449631 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:03Z","lastTransitionTime":"2026-02-18T19:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.551360 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.551390 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.551398 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.551413 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.551422 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:03Z","lastTransitionTime":"2026-02-18T19:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.653577 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.653861 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.653956 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.654047 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.654129 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:03Z","lastTransitionTime":"2026-02-18T19:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.712487 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.712675 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:39:11.712649846 +0000 UTC m=+36.494769370 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.756314 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.756355 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.756367 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.756382 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.756394 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:03Z","lastTransitionTime":"2026-02-18T19:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.814147 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.814211 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.814239 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.814265 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.814361 5007 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.814398 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.814488 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.814510 5007 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.814411 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:11.814397184 +0000 UTC m=+36.596516708 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.814586 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:39:11.814564469 +0000 UTC m=+36.596684083 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.814626 5007 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.814637 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.814759 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.814771 5007 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.814740 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:11.814718734 +0000 UTC m=+36.596838348 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:39:03 crc kubenswrapper[5007]: E0218 19:39:03.814808 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:11.814800726 +0000 UTC m=+36.596920250 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.858643 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.858676 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.858686 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.858698 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.858708 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:03Z","lastTransitionTime":"2026-02-18T19:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.878766 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:32:42.675133066 +0000 UTC Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.960811 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.960841 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.960849 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.960861 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:03 crc kubenswrapper[5007]: I0218 19:39:03.960871 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:03Z","lastTransitionTime":"2026-02-18T19:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.063230 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.063268 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.063278 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.063292 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.063303 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:04Z","lastTransitionTime":"2026-02-18T19:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.099234 5007 generic.go:334] "Generic (PLEG): container finished" podID="8db715e4-a72b-4f74-b2e9-9d529e2b0a47" containerID="165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d" exitCode=0 Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.099285 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" event={"ID":"8db715e4-a72b-4f74-b2e9-9d529e2b0a47","Type":"ContainerDied","Data":"165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d"} Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.114125 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.126526 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.139492 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.151995 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.164173 5007 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.165715 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.165749 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.165762 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.165776 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.165788 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:04Z","lastTransitionTime":"2026-02-18T19:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.178217 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.195399 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.211158 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.224358 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.240564 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.258344 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.268304 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.268338 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.268347 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.268362 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.268374 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:04Z","lastTransitionTime":"2026-02-18T19:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.272242 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.297114 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.309722 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.370872 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.370936 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.370949 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 
19:39:04.370973 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.370987 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:04Z","lastTransitionTime":"2026-02-18T19:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.472816 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.472878 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.472895 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.472922 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.472941 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:04Z","lastTransitionTime":"2026-02-18T19:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.576510 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.576571 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.576585 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.576610 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.576626 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:04Z","lastTransitionTime":"2026-02-18T19:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.678896 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.678938 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.678948 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.678964 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.678975 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:04Z","lastTransitionTime":"2026-02-18T19:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.781706 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.781741 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.781765 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.781780 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.781790 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:04Z","lastTransitionTime":"2026-02-18T19:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.879627 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:51:33.356590289 +0000 UTC Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.884831 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.884872 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.884883 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.884898 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.884912 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:04Z","lastTransitionTime":"2026-02-18T19:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.909173 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.909228 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:04 crc kubenswrapper[5007]: E0218 19:39:04.909332 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:04 crc kubenswrapper[5007]: E0218 19:39:04.909509 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.909173 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:04 crc kubenswrapper[5007]: E0218 19:39:04.909658 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.987665 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.987731 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.987753 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.987779 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:04 crc kubenswrapper[5007]: I0218 19:39:04.987799 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:04Z","lastTransitionTime":"2026-02-18T19:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.090880 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.090947 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.090965 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.091013 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.091032 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:05Z","lastTransitionTime":"2026-02-18T19:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.105387 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" event={"ID":"8db715e4-a72b-4f74-b2e9-9d529e2b0a47","Type":"ContainerStarted","Data":"420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672"} Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.111313 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f"} Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.130173 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc 
kubenswrapper[5007]: I0218 19:39:05.144974 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.159586 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b
3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.176204 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.192311 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.193198 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.193247 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.193266 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.193290 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.193307 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:05Z","lastTransitionTime":"2026-02-18T19:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.209498 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28
471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.231167 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.246793 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.262123 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.276369 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.292872 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.296415 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.296466 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.296476 5007 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.296491 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.296501 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:05Z","lastTransitionTime":"2026-02-18T19:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.310900 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.338631 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.354067 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.399595 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.399641 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.399653 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.399671 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.399682 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:05Z","lastTransitionTime":"2026-02-18T19:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.502620 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.502660 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.502670 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.502690 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.502701 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:05Z","lastTransitionTime":"2026-02-18T19:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.605015 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.605376 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.605635 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.605818 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.605989 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:05Z","lastTransitionTime":"2026-02-18T19:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.708878 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.708928 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.708940 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.708957 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.708968 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:05Z","lastTransitionTime":"2026-02-18T19:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.732449 5007 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.811811 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.811850 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.811860 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.811874 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.811883 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:05Z","lastTransitionTime":"2026-02-18T19:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.880004 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 03:16:05.045242084 +0000 UTC Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.913845 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.913981 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.914044 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.914134 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.914189 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:05Z","lastTransitionTime":"2026-02-18T19:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.926439 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xq
kh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.938647 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.959802 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:05 crc kubenswrapper[5007]: I0218 19:39:05.972659 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.006340 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.018214 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.018279 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.018297 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.018324 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.018343 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:06Z","lastTransitionTime":"2026-02-18T19:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.021803 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.032284 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.046002 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.059746 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.074769 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.091215 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.106956 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.120177 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.120214 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.120222 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.120237 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.120249 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:06Z","lastTransitionTime":"2026-02-18T19:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.120479 5007 generic.go:334] "Generic (PLEG): container finished" podID="8db715e4-a72b-4f74-b2e9-9d529e2b0a47" containerID="420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672" exitCode=0 Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.120529 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" event={"ID":"8db715e4-a72b-4f74-b2e9-9d529e2b0a47","Type":"ContainerDied","Data":"420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672"} Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.124825 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.141152 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.157909 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.174086 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.189325 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.211606 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.223089 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.223123 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.223131 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.223145 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.223156 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:06Z","lastTransitionTime":"2026-02-18T19:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.227546 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.246297 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.259573 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.273194 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.287288 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.307099 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.321515 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.325980 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.326397 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.326411 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 
19:39:06.326443 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.326457 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:06Z","lastTransitionTime":"2026-02-18T19:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.340619 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.354355 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.372040 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.428611 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.428645 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.428654 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.428671 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.428681 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:06Z","lastTransitionTime":"2026-02-18T19:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.530543 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.530603 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.530621 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.530643 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.530660 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:06Z","lastTransitionTime":"2026-02-18T19:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.632578 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.632625 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.632635 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.632651 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.632664 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:06Z","lastTransitionTime":"2026-02-18T19:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.736771 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.736816 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.736827 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.736843 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.736854 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:06Z","lastTransitionTime":"2026-02-18T19:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.839274 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.839320 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.839332 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.839347 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.839358 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:06Z","lastTransitionTime":"2026-02-18T19:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.881032 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:14:37.440157535 +0000 UTC Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.908564 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.908588 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.908638 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:06 crc kubenswrapper[5007]: E0218 19:39:06.908689 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:06 crc kubenswrapper[5007]: E0218 19:39:06.908808 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:06 crc kubenswrapper[5007]: E0218 19:39:06.908883 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.941797 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.941833 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.941843 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.941857 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:06 crc kubenswrapper[5007]: I0218 19:39:06.941866 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:06Z","lastTransitionTime":"2026-02-18T19:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.044293 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.044332 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.044343 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.044359 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.044446 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:07Z","lastTransitionTime":"2026-02-18T19:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.127809 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf"} Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.128116 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.130879 5007 generic.go:334] "Generic (PLEG): container finished" podID="8db715e4-a72b-4f74-b2e9-9d529e2b0a47" containerID="ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9" exitCode=0 Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.130909 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" event={"ID":"8db715e4-a72b-4f74-b2e9-9d529e2b0a47","Type":"ContainerDied","Data":"ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9"} Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.150973 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.151208 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.151229 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.151238 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.151251 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.151260 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:07Z","lastTransitionTime":"2026-02-18T19:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.155096 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.165648 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.180233 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.197920 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.213378 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.228239 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.241872 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.253085 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:07 crc 
kubenswrapper[5007]: I0218 19:39:07.253128 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.253139 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.253168 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.253181 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:07Z","lastTransitionTime":"2026-02-18T19:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.257153 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.272189 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.284183 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.296323 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.306518 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.320982 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.339667 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.352702 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.355299 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.355340 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.355352 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.355370 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.355383 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:07Z","lastTransitionTime":"2026-02-18T19:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.368623 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.382731 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.396632 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.410260 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.424377 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.436316 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.450350 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.458398 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.458475 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.458497 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.458518 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.458534 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:07Z","lastTransitionTime":"2026-02-18T19:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.463552 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.480870 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.494651 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.518073 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.534225 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.552391 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.561133 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.561222 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.561244 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.561272 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.561292 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:07Z","lastTransitionTime":"2026-02-18T19:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.664126 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.664171 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.664183 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.664201 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.664212 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:07Z","lastTransitionTime":"2026-02-18T19:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.766188 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.766223 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.766234 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.766250 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.766261 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:07Z","lastTransitionTime":"2026-02-18T19:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.868313 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.868344 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.868353 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.868366 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.868376 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:07Z","lastTransitionTime":"2026-02-18T19:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.882066 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:14:54.816274734 +0000 UTC Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.970678 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.970732 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.970750 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.970775 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:07 crc kubenswrapper[5007]: I0218 19:39:07.970793 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:07Z","lastTransitionTime":"2026-02-18T19:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.073479 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.073527 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.073539 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.073557 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.073573 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:08Z","lastTransitionTime":"2026-02-18T19:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.139101 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" event={"ID":"8db715e4-a72b-4f74-b2e9-9d529e2b0a47","Type":"ContainerStarted","Data":"13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29"} Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.139192 5007 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.139687 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.157701 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.171253 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.176343 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.176379 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.176392 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.176408 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.176420 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:08Z","lastTransitionTime":"2026-02-18T19:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.189540 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.191770 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.206512 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.225349 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.250244 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.265182 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.279372 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.279727 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.279773 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.279790 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.279813 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.279830 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:08Z","lastTransitionTime":"2026-02-18T19:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.302506 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.317780 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.336384 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.357964 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.377546 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.382246 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.382303 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.382315 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.382332 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.382344 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:08Z","lastTransitionTime":"2026-02-18T19:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.397267 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.412969 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"}
,{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.432752 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.452657 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.465846 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.478301 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.484131 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.484167 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.484181 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.484199 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.484212 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:08Z","lastTransitionTime":"2026-02-18T19:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.493631 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.503734 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.508747 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.522355 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.552289 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.565021 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.577449 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.586975 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:08 crc 
kubenswrapper[5007]: I0218 19:39:08.587051 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.587065 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.587084 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.587098 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:08Z","lastTransitionTime":"2026-02-18T19:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.590466 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.609337 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.625301 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.638736 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.651664 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.664278 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.678708 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.691482 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.691536 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.691551 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.691572 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.691585 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:08Z","lastTransitionTime":"2026-02-18T19:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.692193 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.707772 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.719452 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.734376 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.747230 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.766571 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.781078 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.793992 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.794170 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.794315 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.794445 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.794540 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:08Z","lastTransitionTime":"2026-02-18T19:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.795908 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.810830 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.829068 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\"
,\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.882616 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:48:18.583019462 +0000 UTC Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.897024 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.897053 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.897062 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.897078 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.897087 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:08Z","lastTransitionTime":"2026-02-18T19:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.909457 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.909484 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:08 crc kubenswrapper[5007]: E0218 19:39:08.909570 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.909592 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:08 crc kubenswrapper[5007]: E0218 19:39:08.909700 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:08 crc kubenswrapper[5007]: E0218 19:39:08.909802 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.999228 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.999266 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.999275 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.999293 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:08 crc kubenswrapper[5007]: I0218 19:39:08.999303 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:08Z","lastTransitionTime":"2026-02-18T19:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.101184 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.101212 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.101219 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.102012 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.102034 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:09Z","lastTransitionTime":"2026-02-18T19:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.142152 5007 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.204768 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.204860 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.204879 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.204902 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.204919 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:09Z","lastTransitionTime":"2026-02-18T19:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.309043 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.309092 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.309105 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.309125 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.309142 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:09Z","lastTransitionTime":"2026-02-18T19:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.412214 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.412245 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.412254 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.412266 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.412276 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:09Z","lastTransitionTime":"2026-02-18T19:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.515084 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.515182 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.515206 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.515245 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.515269 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:09Z","lastTransitionTime":"2026-02-18T19:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.630281 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.630340 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.630356 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.630381 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.630404 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:09Z","lastTransitionTime":"2026-02-18T19:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.741105 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.741151 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.741163 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.741184 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.741197 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:09Z","lastTransitionTime":"2026-02-18T19:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.844308 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.844345 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.844358 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.844387 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.844401 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:09Z","lastTransitionTime":"2026-02-18T19:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.883738 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:05:37.663449596 +0000 UTC Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.947164 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.947195 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.947204 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.947217 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:09 crc kubenswrapper[5007]: I0218 19:39:09.947228 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:09Z","lastTransitionTime":"2026-02-18T19:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.050133 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.050171 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.050181 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.050196 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.050208 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:10Z","lastTransitionTime":"2026-02-18T19:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.144718 5007 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.156765 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.156806 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.156820 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.156837 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.156850 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:10Z","lastTransitionTime":"2026-02-18T19:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.259261 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.259301 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.259323 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.259337 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.259347 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:10Z","lastTransitionTime":"2026-02-18T19:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.361207 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.361239 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.361267 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.361282 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.361293 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:10Z","lastTransitionTime":"2026-02-18T19:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.463911 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.463956 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.463968 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.463984 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.463997 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:10Z","lastTransitionTime":"2026-02-18T19:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.570241 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.570275 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.570287 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.570303 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.570316 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:10Z","lastTransitionTime":"2026-02-18T19:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.671995 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.672355 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.672545 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.672687 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.672814 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:10Z","lastTransitionTime":"2026-02-18T19:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.776127 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.776191 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.776208 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.776720 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.776779 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:10Z","lastTransitionTime":"2026-02-18T19:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.878816 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.878853 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.878867 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.878884 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.878897 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:10Z","lastTransitionTime":"2026-02-18T19:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.884119 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:32:18.851546858 +0000 UTC Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.908608 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.908638 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.908620 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:10 crc kubenswrapper[5007]: E0218 19:39:10.908763 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:10 crc kubenswrapper[5007]: E0218 19:39:10.908873 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:10 crc kubenswrapper[5007]: E0218 19:39:10.908967 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.981114 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.981150 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.981159 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.981174 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:10 crc kubenswrapper[5007]: I0218 19:39:10.981209 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:10Z","lastTransitionTime":"2026-02-18T19:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.007063 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j"] Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.008722 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.012126 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.013463 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.026610 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819
eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb
1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec3
0165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68
138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.037317 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.050229 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.060877 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.081641 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.083340 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.083396 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.083414 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.083472 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.083491 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:11Z","lastTransitionTime":"2026-02-18T19:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.095843 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.096412 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a4c08908-559a-4963-869b-17a19622586a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9bf5j\" (UID: \"a4c08908-559a-4963-869b-17a19622586a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.096497 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a4c08908-559a-4963-869b-17a19622586a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9bf5j\" (UID: \"a4c08908-559a-4963-869b-17a19622586a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.096515 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6sw9h\" (UniqueName: \"kubernetes.io/projected/a4c08908-559a-4963-869b-17a19622586a-kube-api-access-6sw9h\") pod \"ovnkube-control-plane-749d76644c-9bf5j\" (UID: \"a4c08908-559a-4963-869b-17a19622586a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.096548 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a4c08908-559a-4963-869b-17a19622586a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9bf5j\" (UID: \"a4c08908-559a-4963-869b-17a19622586a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.109605 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0da
d005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.121685 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.134734 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.150080 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/0.log" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.151117 5007 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c
5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7
44195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.153155 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" containerID="3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf" exitCode=1 Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.153191 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf"} Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.153800 5007 scope.go:117] "RemoveContainer" containerID="3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.165772 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.177970 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.186214 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.186250 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.186260 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.186275 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.186287 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:11Z","lastTransitionTime":"2026-02-18T19:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.191671 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.197958 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a4c08908-559a-4963-869b-17a19622586a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9bf5j\" (UID: \"a4c08908-559a-4963-869b-17a19622586a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.198027 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a4c08908-559a-4963-869b-17a19622586a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9bf5j\" (UID: \"a4c08908-559a-4963-869b-17a19622586a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 
19:39:11.198086 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a4c08908-559a-4963-869b-17a19622586a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9bf5j\" (UID: \"a4c08908-559a-4963-869b-17a19622586a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.198109 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sw9h\" (UniqueName: \"kubernetes.io/projected/a4c08908-559a-4963-869b-17a19622586a-kube-api-access-6sw9h\") pod \"ovnkube-control-plane-749d76644c-9bf5j\" (UID: \"a4c08908-559a-4963-869b-17a19622586a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.198703 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a4c08908-559a-4963-869b-17a19622586a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9bf5j\" (UID: \"a4c08908-559a-4963-869b-17a19622586a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.198874 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a4c08908-559a-4963-869b-17a19622586a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9bf5j\" (UID: \"a4c08908-559a-4963-869b-17a19622586a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.205097 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.207478 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a4c08908-559a-4963-869b-17a19622586a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9bf5j\" (UID: \"a4c08908-559a-4963-869b-17a19622586a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.216917 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sw9h\" (UniqueName: \"kubernetes.io/projected/a4c08908-559a-4963-869b-17a19622586a-kube-api-access-6sw9h\") pod \"ovnkube-control-plane-749d76644c-9bf5j\" (UID: \"a4c08908-559a-4963-869b-17a19622586a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.218374 5007 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.232701 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.248251 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.261182 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.270669 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.280451 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.288211 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.288242 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.288251 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.288264 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.288273 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:11Z","lastTransitionTime":"2026-02-18T19:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.293960 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.305960 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.317946 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.321198 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.335370 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:10Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:10.937932 6298 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:10.937966 6298 handler.go:190] Sending *v1.EgressFirewall 
event handler 9 for removal\\\\nI0218 19:39:10.937990 6298 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:39:10.937996 6298 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:39:10.937999 6298 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:39:10.938064 6298 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:39:10.938068 6298 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:39:10.938077 6298 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:39:10.938088 6298 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:39:10.938101 6298 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:39:10.938104 6298 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:39:10.938112 6298 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:39:10.938123 6298 factory.go:656] Stopping watch factory\\\\nI0218 19:39:10.938139 6298 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:39:10.938149 6298 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: W0218 19:39:11.336572 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4c08908_559a_4963_869b_17a19622586a.slice/crio-b0266001668ad21060381786563e601263230893f1c3981ae6885a4f7ecceb6e WatchSource:0}: Error finding container b0266001668ad21060381786563e601263230893f1c3981ae6885a4f7ecceb6e: Status 404 returned error can't find the container with id b0266001668ad21060381786563e601263230893f1c3981ae6885a4f7ecceb6e Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.352930 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.370404 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.386001 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\"
,\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.391181 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.391233 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.391250 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.391272 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.391292 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:11Z","lastTransitionTime":"2026-02-18T19:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.399872 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.419494 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.431443 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.494108 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.494163 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.494177 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.494194 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.494204 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:11Z","lastTransitionTime":"2026-02-18T19:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.597422 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.597476 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.597487 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.597504 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.597518 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:11Z","lastTransitionTime":"2026-02-18T19:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.700511 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.700569 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.700580 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.700599 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.700610 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:11Z","lastTransitionTime":"2026-02-18T19:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.741654 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-96xnl"] Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.742185 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.742253 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.757303 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.774524 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\"
,\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.788762 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.803514 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.803548 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.803561 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 
19:39:11.803579 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.803590 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:11Z","lastTransitionTime":"2026-02-18T19:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.804964 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.805118 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:39:27.80510195 +0000 UTC m=+52.587221464 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.812911 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.827636 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.844946 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.860301 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.876771 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.884516 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:57:30.578743133 +0000 UTC Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.887486 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.901011 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.905496 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.905534 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.905556 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.905586 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcgww\" (UniqueName: \"kubernetes.io/projected/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-kube-api-access-qcgww\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.905610 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.905751 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.905781 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.905782 5007 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.905631 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.905833 5007 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.905791 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.905875 5007 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:27.90585084 +0000 UTC m=+52.687970434 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.905904 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.905923 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.905932 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.905944 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.905907 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:27.905891621 +0000 UTC m=+52.688011285 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.905796 5007 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.905952 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:11Z","lastTransitionTime":"2026-02-18T19:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.905990 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:27.905975683 +0000 UTC m=+52.688095207 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.906186 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.906206 5007 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:39:11 crc kubenswrapper[5007]: E0218 19:39:11.906256 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:27.906243081 +0000 UTC m=+52.688362605 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.918064 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"rea
son\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.930068 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.942079 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.961135 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:10Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:10.937932 6298 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:10.937966 6298 handler.go:190] Sending *v1.EgressFirewall 
event handler 9 for removal\\\\nI0218 19:39:10.937990 6298 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:39:10.937996 6298 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:39:10.937999 6298 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:39:10.938064 6298 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:39:10.938068 6298 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:39:10.938077 6298 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:39:10.938088 6298 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:39:10.938101 6298 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:39:10.938104 6298 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:39:10.938112 6298 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:39:10.938123 6298 factory.go:656] Stopping watch factory\\\\nI0218 19:39:10.938139 6298 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:39:10.938149 6298 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.974363 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:11 crc kubenswrapper[5007]: I0218 19:39:11.990517 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc 
kubenswrapper[5007]: I0218 19:39:12.007020 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.007111 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcgww\" (UniqueName: \"kubernetes.io/projected/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-kube-api-access-qcgww\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.007345 5007 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.007394 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs podName:7f72bbfc-96e8-4151-9291-bb3da55e7d2b nodeName:}" failed. No retries permitted until 2026-02-18 19:39:12.507381132 +0000 UTC m=+37.289500656 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs") pod "network-metrics-daemon-96xnl" (UID: "7f72bbfc-96e8-4151-9291-bb3da55e7d2b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.008468 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.008497 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.008509 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.008526 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.008538 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.031173 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcgww\" (UniqueName: \"kubernetes.io/projected/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-kube-api-access-qcgww\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.111044 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.111096 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.111110 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.111128 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.111140 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.158565 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/0.log" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.161488 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.161790 5007 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.163135 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" event={"ID":"a4c08908-559a-4963-869b-17a19622586a","Type":"ContainerStarted","Data":"ea1fc95c796ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.163188 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" event={"ID":"a4c08908-559a-4963-869b-17a19622586a","Type":"ContainerStarted","Data":"d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.163200 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" event={"ID":"a4c08908-559a-4963-869b-17a19622586a","Type":"ContainerStarted","Data":"b0266001668ad21060381786563e601263230893f1c3981ae6885a4f7ecceb6e"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.180509 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.198678 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\"
,\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.213959 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.214126 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.214164 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.214407 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.214515 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.214614 
5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.228775 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.239113 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.251191 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.264677 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.274604 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.286215 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.300613 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.317026 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.317075 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.317086 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.317102 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.317115 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.321812 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.335514 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.354329 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:10Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:10.937932 6298 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:10.937966 6298 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:39:10.937990 6298 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0218 19:39:10.937996 6298 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:39:10.937999 6298 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:39:10.938064 6298 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:39:10.938068 6298 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:39:10.938077 6298 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:39:10.938088 6298 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:39:10.938101 6298 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:39:10.938104 6298 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:39:10.938112 6298 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:39:10.938123 6298 factory.go:656] Stopping watch factory\\\\nI0218 19:39:10.938139 6298 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:39:10.938149 6298 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.366674 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.378108 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc 
kubenswrapper[5007]: I0218 19:39:12.390352 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.404590 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf
157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.419861 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.419918 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.419930 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc 
kubenswrapper[5007]: I0218 19:39:12.419950 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.419964 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.422112 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19
:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.439365 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.454228 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.469730 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c7
96ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.483677 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.498450 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.513310 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") 
" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.513453 5007 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.513503 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs podName:7f72bbfc-96e8-4151-9291-bb3da55e7d2b nodeName:}" failed. No retries permitted until 2026-02-18 19:39:13.513490467 +0000 UTC m=+38.295609991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs") pod "network-metrics-daemon-96xnl" (UID: "7f72bbfc-96e8-4151-9291-bb3da55e7d2b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.514124 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.522340 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.522379 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.522392 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.522409 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.522419 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.527266 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.541934 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b
5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.558317 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.560183 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.560223 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.560233 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.560250 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.560262 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.572813 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.577568 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.582045 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.582082 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.582095 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.582115 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.582128 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.584932 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.596196 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.600011 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.600073 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.600086 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.600107 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.600121 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.607920 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:10Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:10.937932 6298 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:10.937966 6298 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:39:10.937990 6298 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0218 19:39:10.937996 6298 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:39:10.937999 6298 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:39:10.938064 6298 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:39:10.938068 6298 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:39:10.938077 6298 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:39:10.938088 6298 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:39:10.938101 6298 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:39:10.938104 6298 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:39:10.938112 6298 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:39:10.938123 6298 factory.go:656] Stopping watch factory\\\\nI0218 19:39:10.938139 6298 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:39:10.938149 6298 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.612511 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.616826 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.616868 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.616878 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.616893 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.616904 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.621057 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.630695 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.635480 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc 
kubenswrapper[5007]: I0218 19:39:12.636021 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.636066 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.636076 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.636093 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.636104 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.650181 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.650288 5007 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.652391 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.652472 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.652485 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.652506 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.652520 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.755423 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.755538 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.755558 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.755587 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.755606 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.859090 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.859136 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.859152 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.859189 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.859201 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.885062 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:15:07.509412472 +0000 UTC Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.909634 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.909696 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.909750 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.909710 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.909813 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.909999 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.910049 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:12 crc kubenswrapper[5007]: E0218 19:39:12.910110 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.963302 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.963366 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.963385 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.963410 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:12 crc kubenswrapper[5007]: I0218 19:39:12.963458 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:12Z","lastTransitionTime":"2026-02-18T19:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.065298 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.065368 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.065385 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.065410 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.065426 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:13Z","lastTransitionTime":"2026-02-18T19:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.167750 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.168181 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.168206 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.168237 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.168260 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:13Z","lastTransitionTime":"2026-02-18T19:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.170760 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/1.log" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.171605 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/0.log" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.174380 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" containerID="d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe" exitCode=1 Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.174412 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe"} Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.174459 5007 scope.go:117] "RemoveContainer" containerID="3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.175400 5007 scope.go:117] "RemoveContainer" containerID="d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe" Feb 18 19:39:13 crc kubenswrapper[5007]: E0218 19:39:13.175607 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.194167 5007 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.213407 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.247370 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:10Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:10.937932 6298 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:10.937966 6298 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:39:10.937990 6298 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0218 19:39:10.937996 6298 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:39:10.937999 6298 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:39:10.938064 6298 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:39:10.938068 6298 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:39:10.938077 6298 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:39:10.938088 6298 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:39:10.938101 6298 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:39:10.938104 6298 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:39:10.938112 6298 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:39:10.938123 6298 factory.go:656] Stopping watch factory\\\\nI0218 19:39:10.938139 6298 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:39:10.938149 6298 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.182805 6449 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:12.182833 6449 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 
19:39:12.183030 6449 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:39:12.184231 6449 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.186285 6449 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:39:12.186347 6449 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:12.186365 6449 factory.go:656] Stopping watch factory\\\\nI0218 19:39:12.186379 6449 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:39:12.186410 6449 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.261505 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.270912 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.270961 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.270977 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.270996 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.271009 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:13Z","lastTransitionTime":"2026-02-18T19:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.276908 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc 
kubenswrapper[5007]: I0218 19:39:13.291391 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.312608 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\"
,\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.334742 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.351663 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.364081 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c7
96ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.374311 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.374371 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.374399 5007 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.374425 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.374494 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:13Z","lastTransitionTime":"2026-02-18T19:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.379369 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.396677 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.414886 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.426986 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.438287 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.462722 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.477796 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.477843 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.478048 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.478070 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.478089 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:13Z","lastTransitionTime":"2026-02-18T19:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.523488 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:13 crc kubenswrapper[5007]: E0218 19:39:13.523721 5007 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:13 crc kubenswrapper[5007]: E0218 19:39:13.523833 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs podName:7f72bbfc-96e8-4151-9291-bb3da55e7d2b nodeName:}" failed. No retries permitted until 2026-02-18 19:39:15.523804293 +0000 UTC m=+40.305923857 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs") pod "network-metrics-daemon-96xnl" (UID: "7f72bbfc-96e8-4151-9291-bb3da55e7d2b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.580559 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.580636 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.580655 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.581134 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.581208 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:13Z","lastTransitionTime":"2026-02-18T19:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.684056 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.684092 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.684104 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.684121 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.684134 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:13Z","lastTransitionTime":"2026-02-18T19:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.787047 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.787121 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.787139 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.787164 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.787186 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:13Z","lastTransitionTime":"2026-02-18T19:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.886191 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:19:40.523570827 +0000 UTC Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.890165 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.890225 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.890244 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.890270 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.890289 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:13Z","lastTransitionTime":"2026-02-18T19:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.994371 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.994463 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.994482 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.994510 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:13 crc kubenswrapper[5007]: I0218 19:39:13.994527 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:13Z","lastTransitionTime":"2026-02-18T19:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.097340 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.097390 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.097402 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.097419 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.097450 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:14Z","lastTransitionTime":"2026-02-18T19:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.187267 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/1.log" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.200039 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.200103 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.200123 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.200145 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.200162 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:14Z","lastTransitionTime":"2026-02-18T19:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.303277 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.303328 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.303345 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.303362 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.303376 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:14Z","lastTransitionTime":"2026-02-18T19:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.406111 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.406515 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.406731 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.407086 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.407227 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:14Z","lastTransitionTime":"2026-02-18T19:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.510920 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.510995 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.511013 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.511039 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.511062 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:14Z","lastTransitionTime":"2026-02-18T19:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.613824 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.613898 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.613920 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.613953 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.613976 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:14Z","lastTransitionTime":"2026-02-18T19:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.716343 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.716386 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.716395 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.716409 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.716419 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:14Z","lastTransitionTime":"2026-02-18T19:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.819458 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.819513 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.819530 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.819562 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.819579 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:14Z","lastTransitionTime":"2026-02-18T19:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.886701 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:07:22.98846883 +0000 UTC Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.908928 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.908994 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.909015 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.909128 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:14 crc kubenswrapper[5007]: E0218 19:39:14.909241 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:14 crc kubenswrapper[5007]: E0218 19:39:14.909324 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:14 crc kubenswrapper[5007]: E0218 19:39:14.909394 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:14 crc kubenswrapper[5007]: E0218 19:39:14.909466 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.921874 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.921900 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.921908 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.921937 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:14 crc kubenswrapper[5007]: I0218 19:39:14.921946 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:14Z","lastTransitionTime":"2026-02-18T19:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.024270 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.024311 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.024349 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.024366 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.024418 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:15Z","lastTransitionTime":"2026-02-18T19:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.127243 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.127291 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.127299 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.127319 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.127329 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:15Z","lastTransitionTime":"2026-02-18T19:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.229981 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.230021 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.230030 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.230050 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.230062 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:15Z","lastTransitionTime":"2026-02-18T19:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.332480 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.332546 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.332565 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.332591 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.332608 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:15Z","lastTransitionTime":"2026-02-18T19:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.435654 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.435726 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.435760 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.435789 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.435807 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:15Z","lastTransitionTime":"2026-02-18T19:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.538919 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.538960 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.538969 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.538991 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.539002 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:15Z","lastTransitionTime":"2026-02-18T19:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.548807 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:15 crc kubenswrapper[5007]: E0218 19:39:15.549004 5007 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:15 crc kubenswrapper[5007]: E0218 19:39:15.549074 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs podName:7f72bbfc-96e8-4151-9291-bb3da55e7d2b nodeName:}" failed. No retries permitted until 2026-02-18 19:39:19.549057869 +0000 UTC m=+44.331177393 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs") pod "network-metrics-daemon-96xnl" (UID: "7f72bbfc-96e8-4151-9291-bb3da55e7d2b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.645503 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.645573 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.645593 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.645620 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.645638 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:15Z","lastTransitionTime":"2026-02-18T19:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.748610 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.748672 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.748685 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.748705 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.748719 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:15Z","lastTransitionTime":"2026-02-18T19:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.850848 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.850880 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.850888 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.850900 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.850909 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:15Z","lastTransitionTime":"2026-02-18T19:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.887455 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:04:17.267840219 +0000 UTC Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.930819 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.949927 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.954085 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.954131 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.954143 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.954163 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.954176 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:15Z","lastTransitionTime":"2026-02-18T19:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.974748 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:15 crc kubenswrapper[5007]: I0218 19:39:15.995792 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.016218 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.034312 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.048325 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.056587 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.056607 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.056616 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.056630 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.056639 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:16Z","lastTransitionTime":"2026-02-18T19:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.065189 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.088159 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3be312d480e0e9e2b08da18e4d7caa915bc845dec568b15801716bd238001dcf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:10Z\\\",\\\"message\\\":\\\"olicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:10.937932 6298 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:10.937966 6298 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:39:10.937990 6298 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0218 19:39:10.937996 6298 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:39:10.937999 6298 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:39:10.938064 6298 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:39:10.938068 6298 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:39:10.938077 6298 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:39:10.938088 6298 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:39:10.938101 6298 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:39:10.938104 6298 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:39:10.938112 6298 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:39:10.938123 6298 factory.go:656] Stopping watch factory\\\\nI0218 19:39:10.938139 6298 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:39:10.938149 6298 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.182805 6449 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:12.182833 6449 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 
19:39:12.183030 6449 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:39:12.184231 6449 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.186285 6449 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:39:12.186347 6449 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:12.186365 6449 factory.go:656] Stopping watch factory\\\\nI0218 19:39:12.186379 6449 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:39:12.186410 6449 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.103648 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.119517 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc 
kubenswrapper[5007]: I0218 19:39:16.134294 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c796ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.148466 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.159900 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.159928 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.159938 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.159952 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.159963 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:16Z","lastTransitionTime":"2026-02-18T19:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.164034 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountP
ath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.177970 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.192259 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.262888 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:16 crc 
kubenswrapper[5007]: I0218 19:39:16.262979 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.262995 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.263018 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.263039 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:16Z","lastTransitionTime":"2026-02-18T19:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.365289 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.365326 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.365354 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.365372 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.365385 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:16Z","lastTransitionTime":"2026-02-18T19:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.467604 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.467655 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.467672 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.467695 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.467713 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:16Z","lastTransitionTime":"2026-02-18T19:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.570252 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.570334 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.570359 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.570389 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.570410 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:16Z","lastTransitionTime":"2026-02-18T19:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.672757 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.672831 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.672848 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.672893 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.672911 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:16Z","lastTransitionTime":"2026-02-18T19:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.776185 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.776229 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.776243 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.776684 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.776829 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:16Z","lastTransitionTime":"2026-02-18T19:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.879264 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.879591 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.879787 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.879941 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.880081 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:16Z","lastTransitionTime":"2026-02-18T19:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.887599 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:28:19.914550332 +0000 UTC Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.909196 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.909265 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:16 crc kubenswrapper[5007]: E0218 19:39:16.909340 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.909418 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:16 crc kubenswrapper[5007]: E0218 19:39:16.909524 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.909273 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:16 crc kubenswrapper[5007]: E0218 19:39:16.909586 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:16 crc kubenswrapper[5007]: E0218 19:39:16.909663 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.983056 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.983120 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.983133 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.983155 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:16 crc kubenswrapper[5007]: I0218 19:39:16.983168 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:16Z","lastTransitionTime":"2026-02-18T19:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.085764 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.085819 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.085830 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.085847 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.085859 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:17Z","lastTransitionTime":"2026-02-18T19:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.105087 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.106562 5007 scope.go:117] "RemoveContainer" containerID="d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe" Feb 18 19:39:17 crc kubenswrapper[5007]: E0218 19:39:17.106916 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.124975 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.142813 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.155802 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.174561 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.188989 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.189044 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.189056 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.189078 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.189094 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:17Z","lastTransitionTime":"2026-02-18T19:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.192563 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.217299 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.233799 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.269969 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.182805 6449 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:12.182833 6449 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.183030 6449 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:39:12.184231 6449 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.186285 6449 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:39:12.186347 6449 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:12.186365 6449 factory.go:656] Stopping watch factory\\\\nI0218 19:39:12.186379 6449 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:39:12.186410 6449 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771
fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.285166 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.292190 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.292247 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.292265 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.292289 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.292308 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:17Z","lastTransitionTime":"2026-02-18T19:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.303030 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc 
kubenswrapper[5007]: I0218 19:39:17.317851 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.331650 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf
157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.345838 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\"
,\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.359981 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.374325 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.388098 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c7
96ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.395616 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.395657 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.395670 5007 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.395688 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.395701 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:17Z","lastTransitionTime":"2026-02-18T19:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.498572 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.498617 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.498626 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.498641 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.498651 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:17Z","lastTransitionTime":"2026-02-18T19:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.601422 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.601469 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.601505 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.601522 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.601534 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:17Z","lastTransitionTime":"2026-02-18T19:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.703591 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.703635 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.703676 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.703749 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.703768 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:17Z","lastTransitionTime":"2026-02-18T19:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.805669 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.805721 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.805741 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.805761 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.805774 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:17Z","lastTransitionTime":"2026-02-18T19:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.888534 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 04:25:14.145982761 +0000 UTC Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.908677 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.908721 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.908731 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.908750 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:17 crc kubenswrapper[5007]: I0218 19:39:17.908767 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:17Z","lastTransitionTime":"2026-02-18T19:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.011217 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.011280 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.011302 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.011334 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.011355 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:18Z","lastTransitionTime":"2026-02-18T19:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.114039 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.114092 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.114101 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.114113 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.114122 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:18Z","lastTransitionTime":"2026-02-18T19:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.216410 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.216550 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.216561 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.216577 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.216586 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:18Z","lastTransitionTime":"2026-02-18T19:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.319217 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.319299 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.319324 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.319357 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.319381 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:18Z","lastTransitionTime":"2026-02-18T19:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.422314 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.422396 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.422415 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.422476 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.422495 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:18Z","lastTransitionTime":"2026-02-18T19:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.525180 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.525233 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.525250 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.525274 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.525291 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:18Z","lastTransitionTime":"2026-02-18T19:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.628361 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.628488 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.628517 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.628545 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.628566 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:18Z","lastTransitionTime":"2026-02-18T19:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.731373 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.731470 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.731491 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.731515 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.731537 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:18Z","lastTransitionTime":"2026-02-18T19:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.834644 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.834678 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.834686 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.834699 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.834709 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:18Z","lastTransitionTime":"2026-02-18T19:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.889381 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:21:27.290634637 +0000 UTC Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.908659 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.908703 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.908796 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.908866 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:18 crc kubenswrapper[5007]: E0218 19:39:18.908860 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:18 crc kubenswrapper[5007]: E0218 19:39:18.908930 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:18 crc kubenswrapper[5007]: E0218 19:39:18.909028 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:18 crc kubenswrapper[5007]: E0218 19:39:18.909100 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.938266 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.938309 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.938327 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.938349 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:18 crc kubenswrapper[5007]: I0218 19:39:18.938366 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:18Z","lastTransitionTime":"2026-02-18T19:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.040839 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.040910 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.040923 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.040939 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.040952 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:19Z","lastTransitionTime":"2026-02-18T19:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.143769 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.143814 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.143823 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.143839 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.143848 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:19Z","lastTransitionTime":"2026-02-18T19:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.246380 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.246426 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.246457 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.246472 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.246482 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:19Z","lastTransitionTime":"2026-02-18T19:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.348665 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.348718 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.348743 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.348770 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.348791 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:19Z","lastTransitionTime":"2026-02-18T19:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.451238 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.451275 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.451285 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.451304 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.451316 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:19Z","lastTransitionTime":"2026-02-18T19:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.554982 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.555043 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.555065 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.555093 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.555113 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:19Z","lastTransitionTime":"2026-02-18T19:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.598855 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:19 crc kubenswrapper[5007]: E0218 19:39:19.599044 5007 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:19 crc kubenswrapper[5007]: E0218 19:39:19.599146 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs podName:7f72bbfc-96e8-4151-9291-bb3da55e7d2b nodeName:}" failed. No retries permitted until 2026-02-18 19:39:27.599116145 +0000 UTC m=+52.381235699 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs") pod "network-metrics-daemon-96xnl" (UID: "7f72bbfc-96e8-4151-9291-bb3da55e7d2b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.657463 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.657512 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.657525 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.657541 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.657551 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:19Z","lastTransitionTime":"2026-02-18T19:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.760117 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.760173 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.760191 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.760213 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.760233 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:19Z","lastTransitionTime":"2026-02-18T19:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.862793 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.862821 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.862829 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.862845 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.862853 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:19Z","lastTransitionTime":"2026-02-18T19:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.890286 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:33:20.407114702 +0000 UTC Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.966150 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.966214 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.966232 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.966273 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:19 crc kubenswrapper[5007]: I0218 19:39:19.966327 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:19Z","lastTransitionTime":"2026-02-18T19:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.068988 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.069037 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.069048 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.069063 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.069074 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:20Z","lastTransitionTime":"2026-02-18T19:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.174368 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.174512 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.174533 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.174566 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.174586 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:20Z","lastTransitionTime":"2026-02-18T19:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.277251 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.277296 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.277306 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.277323 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.277334 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:20Z","lastTransitionTime":"2026-02-18T19:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.380786 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.380837 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.380849 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.380869 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.380883 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:20Z","lastTransitionTime":"2026-02-18T19:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.484541 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.484640 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.484663 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.484699 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.484727 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:20Z","lastTransitionTime":"2026-02-18T19:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.588085 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.588152 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.588170 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.588194 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.588212 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:20Z","lastTransitionTime":"2026-02-18T19:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.692404 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.692758 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.692771 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.692788 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.692802 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:20Z","lastTransitionTime":"2026-02-18T19:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.795304 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.795365 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.795375 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.795391 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.795400 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:20Z","lastTransitionTime":"2026-02-18T19:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.890717 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:55:25.696318708 +0000 UTC Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.898463 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.898496 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.898506 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.898519 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.898531 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:20Z","lastTransitionTime":"2026-02-18T19:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.908931 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.909015 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.908938 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:20 crc kubenswrapper[5007]: I0218 19:39:20.908938 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:20 crc kubenswrapper[5007]: E0218 19:39:20.909129 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:20 crc kubenswrapper[5007]: E0218 19:39:20.909316 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:20 crc kubenswrapper[5007]: E0218 19:39:20.909398 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:20 crc kubenswrapper[5007]: E0218 19:39:20.909545 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.001506 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.001581 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.001602 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.001628 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.001647 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:21Z","lastTransitionTime":"2026-02-18T19:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.104646 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.104736 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.104754 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.104781 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.104800 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:21Z","lastTransitionTime":"2026-02-18T19:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.207986 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.208045 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.208062 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.208085 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.208099 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:21Z","lastTransitionTime":"2026-02-18T19:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.310566 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.310620 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.310633 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.310652 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.310666 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:21Z","lastTransitionTime":"2026-02-18T19:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.413088 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.413131 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.413143 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.413164 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.413176 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:21Z","lastTransitionTime":"2026-02-18T19:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.515651 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.515692 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.515708 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.515725 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.515737 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:21Z","lastTransitionTime":"2026-02-18T19:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.618498 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.618544 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.618555 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.618574 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.618586 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:21Z","lastTransitionTime":"2026-02-18T19:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.720680 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.720710 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.720719 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.720731 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.720739 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:21Z","lastTransitionTime":"2026-02-18T19:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.823323 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.823395 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.823412 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.823480 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.823518 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:21Z","lastTransitionTime":"2026-02-18T19:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.891388 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:49:43.977042396 +0000 UTC Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.925924 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.925976 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.925985 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.926007 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:21 crc kubenswrapper[5007]: I0218 19:39:21.926020 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:21Z","lastTransitionTime":"2026-02-18T19:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.028102 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.028148 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.028159 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.028174 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.028185 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.130393 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.130471 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.130485 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.130505 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.130518 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.232827 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.232887 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.232907 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.232932 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.232950 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.336595 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.336663 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.336709 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.336728 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.336741 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.439506 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.439559 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.439583 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.439603 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.439621 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.542572 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.542832 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.542901 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.542982 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.543042 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.645237 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.645622 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.645731 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.645824 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.645902 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.748477 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.748525 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.748537 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.748553 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.748566 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.851664 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.851709 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.851723 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.851746 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.851761 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.892490 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 03:18:58.405071696 +0000 UTC Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.909103 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.909137 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.909111 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:22 crc kubenswrapper[5007]: E0218 19:39:22.909265 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.909288 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:22 crc kubenswrapper[5007]: E0218 19:39:22.909384 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:22 crc kubenswrapper[5007]: E0218 19:39:22.909521 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:22 crc kubenswrapper[5007]: E0218 19:39:22.909651 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.910548 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.910584 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.910601 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.910621 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.910637 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: E0218 19:39:22.930784 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.940300 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.940347 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.940357 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.940375 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.940391 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: E0218 19:39:22.960743 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.965511 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.965563 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.965582 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.965604 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.965618 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:22 crc kubenswrapper[5007]: E0218 19:39:22.981898 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.986044 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.986088 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.986103 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.986119 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:22 crc kubenswrapper[5007]: I0218 19:39:22.986134 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:22Z","lastTransitionTime":"2026-02-18T19:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:23 crc kubenswrapper[5007]: E0218 19:39:23.000123 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.004916 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.004973 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.004990 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.005011 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.005022 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:23Z","lastTransitionTime":"2026-02-18T19:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:23 crc kubenswrapper[5007]: E0218 19:39:23.019379 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:23 crc kubenswrapper[5007]: E0218 19:39:23.019524 5007 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.020992 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.021024 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.021033 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.021049 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.021058 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:23Z","lastTransitionTime":"2026-02-18T19:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.352217 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.352285 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.352302 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.352324 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.352341 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:23Z","lastTransitionTime":"2026-02-18T19:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.455195 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.455237 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.455247 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.455260 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.455269 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:23Z","lastTransitionTime":"2026-02-18T19:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.557892 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.557929 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.557939 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.557952 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.557962 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:23Z","lastTransitionTime":"2026-02-18T19:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.660707 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.661042 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.661135 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.661232 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.661324 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:23Z","lastTransitionTime":"2026-02-18T19:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.763969 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.764035 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.764048 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.764068 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.764082 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:23Z","lastTransitionTime":"2026-02-18T19:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.868053 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.868323 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.868397 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.868502 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.868573 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:23Z","lastTransitionTime":"2026-02-18T19:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.892586 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 12:30:51.7848022 +0000 UTC Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.970782 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.971043 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.971104 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.971179 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:23 crc kubenswrapper[5007]: I0218 19:39:23.971236 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:23Z","lastTransitionTime":"2026-02-18T19:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.074198 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.074250 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.074260 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.074278 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.074288 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:24Z","lastTransitionTime":"2026-02-18T19:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.177493 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.177569 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.177607 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.177650 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.177683 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:24Z","lastTransitionTime":"2026-02-18T19:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.280158 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.280408 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.280547 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.280669 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.280849 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:24Z","lastTransitionTime":"2026-02-18T19:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.383800 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.383842 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.383854 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.383871 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.383883 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:24Z","lastTransitionTime":"2026-02-18T19:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.486936 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.486976 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.486987 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.487004 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.487016 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:24Z","lastTransitionTime":"2026-02-18T19:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.590124 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.590198 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.590217 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.590247 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.590265 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:24Z","lastTransitionTime":"2026-02-18T19:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.692891 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.692981 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.693015 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.693035 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.693047 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:24Z","lastTransitionTime":"2026-02-18T19:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.795950 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.796012 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.796031 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.796055 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.796073 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:24Z","lastTransitionTime":"2026-02-18T19:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.893338 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:24:37.616286051 +0000 UTC Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.898413 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.898756 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.898924 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.899078 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.899223 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:24Z","lastTransitionTime":"2026-02-18T19:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.908907 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.908907 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.909513 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:24 crc kubenswrapper[5007]: E0218 19:39:24.909556 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:24 crc kubenswrapper[5007]: I0218 19:39:24.909625 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:24 crc kubenswrapper[5007]: E0218 19:39:24.909633 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:24 crc kubenswrapper[5007]: E0218 19:39:24.910350 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:24 crc kubenswrapper[5007]: E0218 19:39:24.910568 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.002598 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.002659 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.002673 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.002699 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.002718 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:25Z","lastTransitionTime":"2026-02-18T19:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.106415 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.106512 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.106529 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.106554 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.106611 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:25Z","lastTransitionTime":"2026-02-18T19:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.208738 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.208796 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.208807 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.208844 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.208858 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:25Z","lastTransitionTime":"2026-02-18T19:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.311426 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.311509 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.311520 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.311536 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.311548 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:25Z","lastTransitionTime":"2026-02-18T19:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.414281 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.414335 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.414353 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.414376 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.414395 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:25Z","lastTransitionTime":"2026-02-18T19:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.517084 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.517145 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.517159 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.517179 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.517193 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:25Z","lastTransitionTime":"2026-02-18T19:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.620246 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.620291 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.620301 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.620317 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.620328 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:25Z","lastTransitionTime":"2026-02-18T19:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.722933 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.722968 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.722977 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.722990 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.723000 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:25Z","lastTransitionTime":"2026-02-18T19:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.825190 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.825233 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.825245 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.825261 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.825272 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:25Z","lastTransitionTime":"2026-02-18T19:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.894203 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:02:54.027994024 +0000 UTC Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.926376 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"
name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.927657 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.927700 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.927715 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.927734 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.927747 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:25Z","lastTransitionTime":"2026-02-18T19:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.942201 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.973359 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.182805 6449 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:12.182833 6449 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.183030 6449 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:39:12.184231 6449 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.186285 6449 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:39:12.186347 6449 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:12.186365 6449 factory.go:656] Stopping watch factory\\\\nI0218 19:39:12.186379 6449 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:39:12.186410 6449 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771
fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:25 crc kubenswrapper[5007]: I0218 19:39:25.988950 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.010420 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc 
kubenswrapper[5007]: I0218 19:39:26.024659 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.029681 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.029721 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.029729 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.029744 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.029754 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:26Z","lastTransitionTime":"2026-02-18T19:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.038832 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5
485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.053351 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.069580 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.077179 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.081669 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c796ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.086303 5007 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.093967 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.108917 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.122798 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.132561 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.132620 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.132637 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.132662 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.132677 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:26Z","lastTransitionTime":"2026-02-18T19:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.140038 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.154399 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b
5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.170768 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.183640 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ff6744-1459-4f93-b17a-fad3fb9c81b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eaa5ac7b67de7ca34ad6deb725dbea9608c0e54080ef4ffe2a1b8b3029076e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f
9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.198387 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.212645 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.224567 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.236300 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.236352 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.236364 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.236382 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.236397 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:26Z","lastTransitionTime":"2026-02-18T19:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.247794 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.182805 6449 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:12.182833 6449 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.183030 6449 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:39:12.184231 6449 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.186285 6449 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:39:12.186347 6449 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:12.186365 6449 factory.go:656] Stopping watch factory\\\\nI0218 19:39:12.186379 6449 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:39:12.186410 6449 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771
fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.259906 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.272918 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc 
kubenswrapper[5007]: I0218 19:39:26.286387 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c796ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.299912 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.317974 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\"
,\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.333229 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.338973 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.339027 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.339044 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 
19:39:26.339066 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.339083 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:26Z","lastTransitionTime":"2026-02-18T19:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.350143 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.365258 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.378800 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.391718 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.402289 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.414872 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.442048 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.442096 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.442114 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.442134 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.442148 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:26Z","lastTransitionTime":"2026-02-18T19:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.543906 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.543992 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.544004 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.544019 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.544029 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:26Z","lastTransitionTime":"2026-02-18T19:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.646831 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.646878 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.646890 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.646906 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.646916 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:26Z","lastTransitionTime":"2026-02-18T19:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.749132 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.749172 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.749183 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.749195 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.749204 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:26Z","lastTransitionTime":"2026-02-18T19:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.852046 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.852131 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.852155 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.852189 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.852212 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:26Z","lastTransitionTime":"2026-02-18T19:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.895531 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:13:33.083603148 +0000 UTC Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.908958 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.908995 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.908999 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.908956 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:26 crc kubenswrapper[5007]: E0218 19:39:26.909155 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:26 crc kubenswrapper[5007]: E0218 19:39:26.909333 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:26 crc kubenswrapper[5007]: E0218 19:39:26.909420 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:26 crc kubenswrapper[5007]: E0218 19:39:26.909522 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.955645 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.955689 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.955698 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.955715 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:26 crc kubenswrapper[5007]: I0218 19:39:26.955726 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:26Z","lastTransitionTime":"2026-02-18T19:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.058790 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.058884 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.058908 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.058942 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.058961 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:27Z","lastTransitionTime":"2026-02-18T19:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.160854 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.160883 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.160891 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.160904 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.160913 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:27Z","lastTransitionTime":"2026-02-18T19:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.264099 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.264226 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.264249 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.264298 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.264337 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:27Z","lastTransitionTime":"2026-02-18T19:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.366748 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.366809 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.366819 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.366832 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.366862 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:27Z","lastTransitionTime":"2026-02-18T19:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.469355 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.469424 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.469528 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.469565 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.469601 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:27Z","lastTransitionTime":"2026-02-18T19:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.571820 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.571934 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.571952 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.571975 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.571992 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:27Z","lastTransitionTime":"2026-02-18T19:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.674382 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.674448 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.674461 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.674479 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.674495 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:27Z","lastTransitionTime":"2026-02-18T19:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.685054 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.685180 5007 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.685240 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs podName:7f72bbfc-96e8-4151-9291-bb3da55e7d2b nodeName:}" failed. No retries permitted until 2026-02-18 19:39:43.685225553 +0000 UTC m=+68.467345077 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs") pod "network-metrics-daemon-96xnl" (UID: "7f72bbfc-96e8-4151-9291-bb3da55e7d2b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.777395 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.777501 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.777519 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.777544 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.777562 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:27Z","lastTransitionTime":"2026-02-18T19:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.879638 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.879688 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.879700 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.879719 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.879732 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:27Z","lastTransitionTime":"2026-02-18T19:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.887200 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.887397 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:39:59.887366761 +0000 UTC m=+84.669486355 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.896312 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:51:58.230898448 +0000 UTC Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.910208 5007 scope.go:117] "RemoveContainer" containerID="d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.983036 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.983075 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.983089 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.983104 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.983116 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:27Z","lastTransitionTime":"2026-02-18T19:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.988155 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.988227 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.988254 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:27 crc kubenswrapper[5007]: I0218 19:39:27.988285 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.988314 5007 
configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.988379 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:59.988360487 +0000 UTC m=+84.770480011 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.988522 5007 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.988624 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.988646 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.988657 5007 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:39:27 crc 
kubenswrapper[5007]: E0218 19:39:27.988676 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.988705 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.988725 5007 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.988656 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:59.988622755 +0000 UTC m=+84.770742319 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.988797 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:59.98878124 +0000 UTC m=+84.770900864 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:39:27 crc kubenswrapper[5007]: E0218 19:39:27.988824 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:39:59.988812631 +0000 UTC m=+84.770932285 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.085592 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.085624 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.085635 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.085652 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.085663 5007 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:28Z","lastTransitionTime":"2026-02-18T19:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.187985 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.188021 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.188029 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.188052 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.188061 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:28Z","lastTransitionTime":"2026-02-18T19:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.291006 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.291045 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.291056 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.291069 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.291078 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:28Z","lastTransitionTime":"2026-02-18T19:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.372733 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/1.log" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.376202 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72"} Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.376805 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.393336 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.393366 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.393375 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.393388 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.393399 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:28Z","lastTransitionTime":"2026-02-18T19:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.395926 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.413886 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.440410 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.182805 6449 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:12.182833 6449 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.183030 6449 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:39:12.184231 6449 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.186285 6449 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:39:12.186347 6449 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:12.186365 6449 factory.go:656] Stopping watch factory\\\\nI0218 19:39:12.186379 6449 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:39:12.186410 6449 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.457287 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.473609 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc 
kubenswrapper[5007]: I0218 19:39:28.489175 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.495182 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.495213 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.495222 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.495235 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.495247 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:28Z","lastTransitionTime":"2026-02-18T19:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.507616 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5
485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.521132 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.535706 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.551501 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c7
96ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.568197 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.584742 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.598159 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.598223 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.598237 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.598260 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.598277 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:28Z","lastTransitionTime":"2026-02-18T19:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.603157 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.614898 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.627834 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.640791 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ff6744-1459-4f93-b17a-fad3fb9c81b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eaa5ac7b67de7ca34ad6deb725dbea9608c0e54080ef4ffe2a1b8b3029076e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.664298 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.699998 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.700031 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.700040 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.700053 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.700063 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:28Z","lastTransitionTime":"2026-02-18T19:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.802119 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.802167 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.802180 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.802196 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.802209 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:28Z","lastTransitionTime":"2026-02-18T19:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.897406 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:44:08.057163048 +0000 UTC Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.905069 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.905133 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.905155 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.905181 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.905199 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:28Z","lastTransitionTime":"2026-02-18T19:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.909391 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.909508 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:28 crc kubenswrapper[5007]: E0218 19:39:28.909559 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.909509 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:28 crc kubenswrapper[5007]: E0218 19:39:28.909672 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:28 crc kubenswrapper[5007]: I0218 19:39:28.909405 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:28 crc kubenswrapper[5007]: E0218 19:39:28.909789 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:28 crc kubenswrapper[5007]: E0218 19:39:28.909862 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.009124 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.009190 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.009202 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.009226 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.009241 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:29Z","lastTransitionTime":"2026-02-18T19:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.111882 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.111939 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.111950 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.111979 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.112007 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:29Z","lastTransitionTime":"2026-02-18T19:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.215480 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.215554 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.215568 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.215593 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.215609 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:29Z","lastTransitionTime":"2026-02-18T19:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.319520 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.319560 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.319569 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.319585 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.319597 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:29Z","lastTransitionTime":"2026-02-18T19:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.381172 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/2.log" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.381868 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/1.log" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.383780 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" containerID="e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72" exitCode=1 Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.383823 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72"} Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.383867 5007 scope.go:117] "RemoveContainer" containerID="d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.384679 5007 scope.go:117] "RemoveContainer" containerID="e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72" Feb 18 19:39:29 crc kubenswrapper[5007]: E0218 19:39:29.384833 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.413925 5007 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.421989 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.422058 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.422077 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.422091 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.422102 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:29Z","lastTransitionTime":"2026-02-18T19:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.444343 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d65e2794f928a71759e0c9845e2df1c8b57e5837c10f2fd9aeea6887ccea9afe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.182805 6449 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:12.182833 6449 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.183030 6449 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:39:12.184231 6449 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:12.186285 6449 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:39:12.186347 6449 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:39:12.186365 6449 factory.go:656] Stopping watch factory\\\\nI0218 19:39:12.186379 6449 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:39:12.186410 6449 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:29Z\\\",\\\"message\\\":\\\"to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z]\\\\nI0218 19:39:29.074508 6650 
obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0218 19:39:29.074510 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074515 6650 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0218 19:39:29.074525 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074532 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0218 19:39:29.074464 6650 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0218 19:39:29.074543 6650 ob\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.456452 5007 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.467781 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc 
kubenswrapper[5007]: I0218 19:39:29.481076 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.497716 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf
157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.512525 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\"
,\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.524431 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.524484 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.524493 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.524508 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.524517 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:29Z","lastTransitionTime":"2026-02-18T19:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.525932 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.539603 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.550079 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c7
96ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.561314 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.573159 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.587541 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.601937 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.616126 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.626504 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.626557 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.626572 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.626591 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.626604 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:29Z","lastTransitionTime":"2026-02-18T19:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.632337 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.644883 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ff6744-1459-4f93-b17a-fad3fb9c81b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eaa5ac7b67de7ca34ad6deb725dbea9608c0e54080ef4ffe2a1b8b3029076e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.729301 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.729340 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.729350 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.729365 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.729376 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:29Z","lastTransitionTime":"2026-02-18T19:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.832556 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.832640 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.832657 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.832678 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.832691 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:29Z","lastTransitionTime":"2026-02-18T19:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.898179 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:45:05.25829675 +0000 UTC Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.935309 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.935360 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.935369 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.935385 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:29 crc kubenswrapper[5007]: I0218 19:39:29.935396 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:29Z","lastTransitionTime":"2026-02-18T19:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.039377 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.039419 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.039445 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.039462 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.039474 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:30Z","lastTransitionTime":"2026-02-18T19:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.142388 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.142467 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.142482 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.142502 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.142514 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:30Z","lastTransitionTime":"2026-02-18T19:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.251800 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.252332 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.252391 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.252460 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.252567 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:30Z","lastTransitionTime":"2026-02-18T19:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.356905 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.356964 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.356980 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.357001 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.357016 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:30Z","lastTransitionTime":"2026-02-18T19:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.391489 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/2.log" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.397146 5007 scope.go:117] "RemoveContainer" containerID="e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72" Feb 18 19:39:30 crc kubenswrapper[5007]: E0218 19:39:30.397595 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.413701 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.437599 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.460630 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.460775 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.460855 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.460880 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.460937 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:30Z","lastTransitionTime":"2026-02-18T19:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.462364 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.488008 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:29Z\\\",\\\"message\\\":\\\"to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z]\\\\nI0218 19:39:29.074508 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0218 19:39:29.074510 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074515 6650 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0218 19:39:29.074525 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074532 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0218 19:39:29.074464 6650 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0218 19:39:29.074543 6650 ob\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771
fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.504693 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.524566 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.540409 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c796ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.555316 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.564377 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.564420 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.564450 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 
19:39:30.564468 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.564481 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:30Z","lastTransitionTime":"2026-02-18T19:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.570484 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.590790 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.603099 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b
5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.619892 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.636957 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.653110 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.663904 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.666912 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.666966 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.666979 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.667000 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.667013 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:30Z","lastTransitionTime":"2026-02-18T19:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.675869 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ff6744-1459-4f93-b17a-fad3fb9c81b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eaa5ac7b67de7ca34ad6deb725db
ea9608c0e54080ef4ffe2a1b8b3029076e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.689272 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.770550 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.770620 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.770644 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.770673 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.770691 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:30Z","lastTransitionTime":"2026-02-18T19:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.874189 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.874254 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.874265 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.874290 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.874303 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:30Z","lastTransitionTime":"2026-02-18T19:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.899051 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:17:21.930863518 +0000 UTC Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.908781 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.908817 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.908854 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.908794 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:30 crc kubenswrapper[5007]: E0218 19:39:30.908977 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:30 crc kubenswrapper[5007]: E0218 19:39:30.909086 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:30 crc kubenswrapper[5007]: E0218 19:39:30.909171 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:30 crc kubenswrapper[5007]: E0218 19:39:30.909402 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.977406 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.977664 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.977787 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.977884 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:30 crc kubenswrapper[5007]: I0218 19:39:30.977957 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:30Z","lastTransitionTime":"2026-02-18T19:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.082078 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.082168 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.082184 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.082213 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.082231 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:31Z","lastTransitionTime":"2026-02-18T19:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.185058 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.185117 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.185131 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.185149 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.185163 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:31Z","lastTransitionTime":"2026-02-18T19:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.288264 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.288359 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.288376 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.288396 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.288412 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:31Z","lastTransitionTime":"2026-02-18T19:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.392473 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.392531 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.392544 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.392566 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.392580 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:31Z","lastTransitionTime":"2026-02-18T19:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.495841 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.495912 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.495937 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.495965 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.495980 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:31Z","lastTransitionTime":"2026-02-18T19:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.599355 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.599427 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.599483 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.599511 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.599532 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:31Z","lastTransitionTime":"2026-02-18T19:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.703452 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.703506 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.703519 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.703541 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.703553 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:31Z","lastTransitionTime":"2026-02-18T19:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.807375 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.807516 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.807543 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.807574 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.807592 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:31Z","lastTransitionTime":"2026-02-18T19:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.900515 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:35:50.128243117 +0000 UTC Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.912632 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.912958 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.913143 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.913271 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:31 crc kubenswrapper[5007]: I0218 19:39:31.913380 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:31Z","lastTransitionTime":"2026-02-18T19:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.017183 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.017306 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.017326 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.017859 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.018094 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:32Z","lastTransitionTime":"2026-02-18T19:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.121052 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.121117 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.121136 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.121162 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.121180 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:32Z","lastTransitionTime":"2026-02-18T19:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.224619 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.224697 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.224719 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.224754 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.224781 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:32Z","lastTransitionTime":"2026-02-18T19:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.327781 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.327875 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.327896 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.327924 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.327943 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:32Z","lastTransitionTime":"2026-02-18T19:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.431493 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.431576 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.431603 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.431635 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.431659 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:32Z","lastTransitionTime":"2026-02-18T19:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.535582 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.535669 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.535685 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.535706 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.535721 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:32Z","lastTransitionTime":"2026-02-18T19:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.639254 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.639330 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.639349 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.639371 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.639385 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:32Z","lastTransitionTime":"2026-02-18T19:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.742871 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.742926 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.742935 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.742950 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.742960 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:32Z","lastTransitionTime":"2026-02-18T19:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.846566 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.846626 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.846639 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.846658 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.846671 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:32Z","lastTransitionTime":"2026-02-18T19:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.901901 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:21:45.280681344 +0000 UTC Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.909400 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.909468 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.909525 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.909645 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:32 crc kubenswrapper[5007]: E0218 19:39:32.909755 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:32 crc kubenswrapper[5007]: E0218 19:39:32.909912 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:32 crc kubenswrapper[5007]: E0218 19:39:32.910106 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:32 crc kubenswrapper[5007]: E0218 19:39:32.910167 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.949459 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.949526 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.949551 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.949587 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:32 crc kubenswrapper[5007]: I0218 19:39:32.949616 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:32Z","lastTransitionTime":"2026-02-18T19:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.053228 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.053295 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.053308 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.053327 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.053338 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.155940 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.155983 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.155993 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.156010 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.156022 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.210783 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.210835 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.210847 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.210867 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.210883 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: E0218 19:39:33.226628 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:33Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.232126 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.232179 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.232195 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.232214 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.232227 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: E0218 19:39:33.247793 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:33Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.252966 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.253060 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.253087 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.253148 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.253170 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: E0218 19:39:33.273063 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:33Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.277599 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.277633 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.277644 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.277659 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.277671 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: E0218 19:39:33.294759 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:33Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.298739 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.298790 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.298808 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.298831 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.298846 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: E0218 19:39:33.315238 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:33Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:33 crc kubenswrapper[5007]: E0218 19:39:33.315412 5007 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.316671 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.316737 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.316758 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.316790 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.316810 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.420693 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.420804 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.420826 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.420858 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.420878 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.523546 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.523595 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.523610 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.523629 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.523645 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.627563 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.627614 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.627627 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.627648 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.627661 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.730856 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.730954 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.730987 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.731024 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.731056 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.834749 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.834811 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.834824 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.834846 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.834861 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.903035 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:11:07.370611548 +0000 UTC Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.938466 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.938569 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.938598 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.938641 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:33 crc kubenswrapper[5007]: I0218 19:39:33.938665 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:33Z","lastTransitionTime":"2026-02-18T19:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.042571 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.042663 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.042687 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.042718 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.042739 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:34Z","lastTransitionTime":"2026-02-18T19:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.145830 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.145888 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.145905 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.145928 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.145953 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:34Z","lastTransitionTime":"2026-02-18T19:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.250237 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.250359 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.250385 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.250420 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.250481 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:34Z","lastTransitionTime":"2026-02-18T19:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.353372 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.353423 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.353453 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.353473 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.353489 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:34Z","lastTransitionTime":"2026-02-18T19:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.456586 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.456664 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.456676 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.456699 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.456715 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:34Z","lastTransitionTime":"2026-02-18T19:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.559593 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.559674 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.559701 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.559738 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.559765 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:34Z","lastTransitionTime":"2026-02-18T19:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.662835 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.662956 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.662975 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.663002 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.663021 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:34Z","lastTransitionTime":"2026-02-18T19:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.765790 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.765834 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.765843 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.765861 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.765874 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:34Z","lastTransitionTime":"2026-02-18T19:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.869289 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.869347 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.869363 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.869389 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.869407 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:34Z","lastTransitionTime":"2026-02-18T19:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.903884 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:17:54.584090923 +0000 UTC Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.909340 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:34 crc kubenswrapper[5007]: E0218 19:39:34.909534 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.909647 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.909680 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:34 crc kubenswrapper[5007]: E0218 19:39:34.909778 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.909807 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:34 crc kubenswrapper[5007]: E0218 19:39:34.909929 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:34 crc kubenswrapper[5007]: E0218 19:39:34.910024 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.972847 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.972897 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.972909 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.972947 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:34 crc kubenswrapper[5007]: I0218 19:39:34.972961 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:34Z","lastTransitionTime":"2026-02-18T19:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.076546 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.076601 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.076616 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.076638 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.076655 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:35Z","lastTransitionTime":"2026-02-18T19:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.180019 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.180063 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.180077 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.180094 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.180105 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:35Z","lastTransitionTime":"2026-02-18T19:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.283006 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.283084 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.283093 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.283106 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.283115 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:35Z","lastTransitionTime":"2026-02-18T19:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.385662 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.385761 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.385788 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.385857 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.385890 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:35Z","lastTransitionTime":"2026-02-18T19:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.488587 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.488668 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.488699 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.488728 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.488750 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:35Z","lastTransitionTime":"2026-02-18T19:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.591680 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.591735 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.591749 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.591768 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.591781 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:35Z","lastTransitionTime":"2026-02-18T19:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.695185 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.695229 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.695242 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.695258 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.695270 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:35Z","lastTransitionTime":"2026-02-18T19:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.798071 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.798154 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.798178 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.798206 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.798228 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:35Z","lastTransitionTime":"2026-02-18T19:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.900779 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.900822 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.900832 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.900844 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.900855 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:35Z","lastTransitionTime":"2026-02-18T19:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.904868 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:34:13.753996419 +0000 UTC Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.925108 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.938398 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\"
,\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.951946 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.966161 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.978313 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c7
96ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:35 crc kubenswrapper[5007]: I0218 19:39:35.993222 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.004367 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.004406 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.004418 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.004465 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.004487 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:36Z","lastTransitionTime":"2026-02-18T19:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.015191 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.030149 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.045101 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.057552 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.074000 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ff6744-1459-4f93-b17a-fad3fb9c81b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eaa5ac7b67de7ca34ad6deb725dbea9608c0e54080ef4ffe2a1b8b3029076e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.092181 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.106504 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.106634 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.106721 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.106811 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.106904 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:36Z","lastTransitionTime":"2026-02-18T19:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.108717 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.123923 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.146108 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:29Z\\\",\\\"message\\\":\\\"to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z]\\\\nI0218 19:39:29.074508 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0218 19:39:29.074510 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074515 6650 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0218 19:39:29.074525 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074532 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0218 19:39:29.074464 6650 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0218 19:39:29.074543 6650 ob\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771
fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.157118 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.167269 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:36 crc 
kubenswrapper[5007]: I0218 19:39:36.208736 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.208771 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.208781 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.208797 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.208809 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:36Z","lastTransitionTime":"2026-02-18T19:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.311504 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.311543 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.311552 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.311564 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.311572 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:36Z","lastTransitionTime":"2026-02-18T19:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.414608 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.414666 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.414684 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.414711 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.414729 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:36Z","lastTransitionTime":"2026-02-18T19:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.518520 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.518595 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.518621 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.518651 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.518674 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:36Z","lastTransitionTime":"2026-02-18T19:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.622203 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.622659 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.622751 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.622854 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.622951 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:36Z","lastTransitionTime":"2026-02-18T19:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.727938 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.728751 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.728904 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.729056 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.729197 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:36Z","lastTransitionTime":"2026-02-18T19:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.833202 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.833259 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.833270 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.833288 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.833300 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:36Z","lastTransitionTime":"2026-02-18T19:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.905515 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:36:01.552648182 +0000 UTC Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.909546 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.909663 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.909815 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.909970 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:36 crc kubenswrapper[5007]: E0218 19:39:36.910037 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:36 crc kubenswrapper[5007]: E0218 19:39:36.910283 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:36 crc kubenswrapper[5007]: E0218 19:39:36.910600 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:36 crc kubenswrapper[5007]: E0218 19:39:36.910704 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.935997 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.936037 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.936050 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.936068 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:36 crc kubenswrapper[5007]: I0218 19:39:36.936085 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:36Z","lastTransitionTime":"2026-02-18T19:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.038723 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.038760 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.038769 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.038786 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.038797 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:37Z","lastTransitionTime":"2026-02-18T19:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.141996 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.142171 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.142190 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.142214 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.142232 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:37Z","lastTransitionTime":"2026-02-18T19:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.245720 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.245790 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.245808 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.245841 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.245861 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:37Z","lastTransitionTime":"2026-02-18T19:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.349281 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.349341 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.349353 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.349372 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.349384 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:37Z","lastTransitionTime":"2026-02-18T19:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.452011 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.452352 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.452524 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.452667 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.452799 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:37Z","lastTransitionTime":"2026-02-18T19:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.556310 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.556684 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.556843 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.556992 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.557128 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:37Z","lastTransitionTime":"2026-02-18T19:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.660989 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.661046 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.661067 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.661094 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.661114 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:37Z","lastTransitionTime":"2026-02-18T19:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.764823 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.764899 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.764925 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.764960 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.764982 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:37Z","lastTransitionTime":"2026-02-18T19:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.868808 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.868893 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.868911 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.868942 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.868966 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:37Z","lastTransitionTime":"2026-02-18T19:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.906634 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:35:52.63128777 +0000 UTC Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.972664 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.972739 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.972761 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.972793 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:37 crc kubenswrapper[5007]: I0218 19:39:37.972815 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:37Z","lastTransitionTime":"2026-02-18T19:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.076271 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.076335 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.076354 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.076379 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.076399 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:38Z","lastTransitionTime":"2026-02-18T19:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.178817 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.178880 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.178891 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.178906 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.178916 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:38Z","lastTransitionTime":"2026-02-18T19:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.282409 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.282547 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.282576 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.282614 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.282650 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:38Z","lastTransitionTime":"2026-02-18T19:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.385299 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.385361 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.385374 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.385394 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.385409 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:38Z","lastTransitionTime":"2026-02-18T19:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.489010 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.489075 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.489092 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.489121 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.489142 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:38Z","lastTransitionTime":"2026-02-18T19:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.591497 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.591548 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.591558 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.591578 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.591590 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:38Z","lastTransitionTime":"2026-02-18T19:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.694449 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.694490 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.694499 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.694512 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.694526 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:38Z","lastTransitionTime":"2026-02-18T19:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.797018 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.797063 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.797074 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.797090 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.797101 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:38Z","lastTransitionTime":"2026-02-18T19:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.900625 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.900739 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.900768 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.900799 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.900818 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:38Z","lastTransitionTime":"2026-02-18T19:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.907844 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:29:37.11004341 +0000 UTC Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.909253 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.909292 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.909494 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:38 crc kubenswrapper[5007]: I0218 19:39:38.909292 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:38 crc kubenswrapper[5007]: E0218 19:39:38.909819 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:38 crc kubenswrapper[5007]: E0218 19:39:38.909955 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:38 crc kubenswrapper[5007]: E0218 19:39:38.909683 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:38 crc kubenswrapper[5007]: E0218 19:39:38.910145 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.003615 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.003682 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.003700 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.003728 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.003777 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:39Z","lastTransitionTime":"2026-02-18T19:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.106500 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.106580 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.106619 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.106654 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.106679 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:39Z","lastTransitionTime":"2026-02-18T19:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.210568 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.210668 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.210704 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.210741 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.210768 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:39Z","lastTransitionTime":"2026-02-18T19:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.313358 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.313446 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.313460 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.313486 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.313502 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:39Z","lastTransitionTime":"2026-02-18T19:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.416387 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.416453 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.416464 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.416483 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.416496 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:39Z","lastTransitionTime":"2026-02-18T19:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.522586 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.522652 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.522664 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.522686 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.522705 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:39Z","lastTransitionTime":"2026-02-18T19:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.626139 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.626223 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.626242 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.626273 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.626292 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:39Z","lastTransitionTime":"2026-02-18T19:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.728554 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.728613 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.728627 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.728646 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.728662 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:39Z","lastTransitionTime":"2026-02-18T19:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.831841 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.831896 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.831909 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.831930 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.831942 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:39Z","lastTransitionTime":"2026-02-18T19:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.908272 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:30:59.305925219 +0000 UTC Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.934912 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.935008 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.935026 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.935053 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:39 crc kubenswrapper[5007]: I0218 19:39:39.935076 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:39Z","lastTransitionTime":"2026-02-18T19:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.037820 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.037892 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.037911 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.037937 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.037954 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:40Z","lastTransitionTime":"2026-02-18T19:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.140969 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.141041 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.141060 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.141088 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.141109 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:40Z","lastTransitionTime":"2026-02-18T19:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.244221 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.244278 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.244290 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.244311 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.244324 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:40Z","lastTransitionTime":"2026-02-18T19:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.347310 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.347350 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.347364 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.347383 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.347393 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:40Z","lastTransitionTime":"2026-02-18T19:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.450255 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.450326 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.450345 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.450375 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.450399 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:40Z","lastTransitionTime":"2026-02-18T19:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.553101 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.553155 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.553166 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.553181 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.553191 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:40Z","lastTransitionTime":"2026-02-18T19:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.668217 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.668286 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.668304 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.668331 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.668351 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:40Z","lastTransitionTime":"2026-02-18T19:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.771303 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.771343 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.771352 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.771369 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.771380 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:40Z","lastTransitionTime":"2026-02-18T19:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.873789 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.873858 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.873870 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.873889 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.873902 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:40Z","lastTransitionTime":"2026-02-18T19:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.909193 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:17:11.610369589 +0000 UTC Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.909476 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.909562 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:40 crc kubenswrapper[5007]: E0218 19:39:40.909610 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.909824 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:40 crc kubenswrapper[5007]: E0218 19:39:40.909838 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:40 crc kubenswrapper[5007]: E0218 19:39:40.909890 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.909993 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:40 crc kubenswrapper[5007]: E0218 19:39:40.910050 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.977330 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.977380 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.977391 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.977413 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:40 crc kubenswrapper[5007]: I0218 19:39:40.977444 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:40Z","lastTransitionTime":"2026-02-18T19:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.080381 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.080478 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.080496 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.080523 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.080543 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:41Z","lastTransitionTime":"2026-02-18T19:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.182872 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.182945 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.182970 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.183006 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.183025 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:41Z","lastTransitionTime":"2026-02-18T19:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.285147 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.285189 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.285198 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.285215 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.285226 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:41Z","lastTransitionTime":"2026-02-18T19:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.388954 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.389018 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.389040 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.389072 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.389128 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:41Z","lastTransitionTime":"2026-02-18T19:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.492016 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.492096 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.492119 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.492152 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.492186 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:41Z","lastTransitionTime":"2026-02-18T19:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.594575 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.594653 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.594668 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.594692 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.594706 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:41Z","lastTransitionTime":"2026-02-18T19:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.697464 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.697514 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.697526 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.697546 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.697559 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:41Z","lastTransitionTime":"2026-02-18T19:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.800328 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.800375 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.800386 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.800417 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.800445 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:41Z","lastTransitionTime":"2026-02-18T19:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.903503 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.903545 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.903558 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.903575 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.903589 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:41Z","lastTransitionTime":"2026-02-18T19:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:41 crc kubenswrapper[5007]: I0218 19:39:41.909797 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:45:44.154299112 +0000 UTC Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.006293 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.006378 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.006406 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.006476 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.006506 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:42Z","lastTransitionTime":"2026-02-18T19:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.108497 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.108540 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.108553 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.108570 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.108583 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:42Z","lastTransitionTime":"2026-02-18T19:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.211263 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.211353 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.211387 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.211425 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.211504 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:42Z","lastTransitionTime":"2026-02-18T19:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.314299 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.314389 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.314416 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.314487 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.314518 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:42Z","lastTransitionTime":"2026-02-18T19:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.417460 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.417515 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.417528 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.417547 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.417560 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:42Z","lastTransitionTime":"2026-02-18T19:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.520202 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.520260 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.520272 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.520288 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.520322 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:42Z","lastTransitionTime":"2026-02-18T19:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.622347 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.622401 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.622413 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.622446 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.622496 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:42Z","lastTransitionTime":"2026-02-18T19:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.725085 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.725128 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.725140 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.725157 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.725169 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:42Z","lastTransitionTime":"2026-02-18T19:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.826994 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.827035 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.827046 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.827063 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.827077 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:42Z","lastTransitionTime":"2026-02-18T19:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.909143 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.909190 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.909205 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.909292 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:42 crc kubenswrapper[5007]: E0218 19:39:42.909500 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:42 crc kubenswrapper[5007]: E0218 19:39:42.909571 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:42 crc kubenswrapper[5007]: E0218 19:39:42.909415 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.909918 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:14:25.044088957 +0000 UTC Feb 18 19:39:42 crc kubenswrapper[5007]: E0218 19:39:42.910178 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.910373 5007 scope.go:117] "RemoveContainer" containerID="e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72" Feb 18 19:39:42 crc kubenswrapper[5007]: E0218 19:39:42.910647 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.929218 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.929257 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.929275 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:42 
crc kubenswrapper[5007]: I0218 19:39:42.929296 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:42 crc kubenswrapper[5007]: I0218 19:39:42.929314 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:42Z","lastTransitionTime":"2026-02-18T19:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.031787 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.031845 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.031860 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.031880 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.031893 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.134092 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.134138 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.134163 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.134183 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.134197 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.237147 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.237209 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.237221 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.237237 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.237247 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.339250 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.339294 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.339304 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.339320 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.339333 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.441116 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.441185 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.441208 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.441237 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.441259 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.442376 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.442457 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.442482 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.442509 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.442529 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: E0218 19:39:43.456168 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.460342 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.460479 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.460565 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.460628 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.460687 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: E0218 19:39:43.473821 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.477508 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.477548 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.477560 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.477576 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.477589 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: E0218 19:39:43.504109 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.507253 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.507300 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.507311 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.507325 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.507335 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: E0218 19:39:43.521392 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:43 crc kubenswrapper[5007]: E0218 19:39:43.521778 5007 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.543832 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.543976 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.544046 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.544134 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.544220 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.647222 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.647496 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.647570 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.647632 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.647690 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.698670 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:43 crc kubenswrapper[5007]: E0218 19:39:43.698801 5007 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:43 crc kubenswrapper[5007]: E0218 19:39:43.698846 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs podName:7f72bbfc-96e8-4151-9291-bb3da55e7d2b nodeName:}" failed. No retries permitted until 2026-02-18 19:40:15.698832083 +0000 UTC m=+100.480951607 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs") pod "network-metrics-daemon-96xnl" (UID: "7f72bbfc-96e8-4151-9291-bb3da55e7d2b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.750283 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.750325 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.750335 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.750351 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.750361 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.854246 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.854293 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.854304 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.854319 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.854329 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.910077 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:28:05.791483925 +0000 UTC Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.956797 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.956846 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.956860 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.956879 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:43 crc kubenswrapper[5007]: I0218 19:39:43.956895 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:43Z","lastTransitionTime":"2026-02-18T19:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.059807 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.059853 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.059865 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.059884 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.059900 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:44Z","lastTransitionTime":"2026-02-18T19:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.163093 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.163147 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.163162 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.163184 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.163201 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:44Z","lastTransitionTime":"2026-02-18T19:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.266365 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.266492 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.266534 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.266570 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.266597 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:44Z","lastTransitionTime":"2026-02-18T19:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.369399 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.369506 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.369549 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.369578 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.369597 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:44Z","lastTransitionTime":"2026-02-18T19:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.472905 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.473021 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.473048 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.473078 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.473103 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:44Z","lastTransitionTime":"2026-02-18T19:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.575556 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.575603 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.575615 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.575633 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.575648 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:44Z","lastTransitionTime":"2026-02-18T19:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.677740 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.677781 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.677794 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.677811 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.677824 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:44Z","lastTransitionTime":"2026-02-18T19:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.780893 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.780962 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.780983 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.781008 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.781028 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:44Z","lastTransitionTime":"2026-02-18T19:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.883510 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.883543 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.883552 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.883564 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.883573 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:44Z","lastTransitionTime":"2026-02-18T19:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.909520 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:44 crc kubenswrapper[5007]: E0218 19:39:44.909641 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.909815 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.909809 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:44 crc kubenswrapper[5007]: E0218 19:39:44.909866 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.909860 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:44 crc kubenswrapper[5007]: E0218 19:39:44.910002 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:44 crc kubenswrapper[5007]: E0218 19:39:44.910196 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.910326 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:40:50.339666149 +0000 UTC Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.986059 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.986095 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.986107 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.986123 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:44 crc kubenswrapper[5007]: I0218 19:39:44.986136 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:44Z","lastTransitionTime":"2026-02-18T19:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.088841 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.088880 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.088889 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.088903 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.088915 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:45Z","lastTransitionTime":"2026-02-18T19:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.190869 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.190906 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.190915 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.190959 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.190974 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:45Z","lastTransitionTime":"2026-02-18T19:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.294184 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.294257 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.294271 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.294289 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.294304 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:45Z","lastTransitionTime":"2026-02-18T19:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.397194 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.397235 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.397247 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.397265 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.397280 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:45Z","lastTransitionTime":"2026-02-18T19:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.499895 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.499954 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.499966 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.499986 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.500007 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:45Z","lastTransitionTime":"2026-02-18T19:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.601961 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.601993 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.602001 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.602014 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.602023 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:45Z","lastTransitionTime":"2026-02-18T19:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.731028 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.731066 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.731074 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.731088 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.731098 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:45Z","lastTransitionTime":"2026-02-18T19:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.832974 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.833013 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.833022 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.833039 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.833052 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:45Z","lastTransitionTime":"2026-02-18T19:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.910658 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 04:30:53.881217926 +0000 UTC Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.928905 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.934908 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.934983 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.934998 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.935016 5007 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.935116 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:45Z","lastTransitionTime":"2026-02-18T19:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.950698 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe526
5755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.963784 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.981317 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:45 crc kubenswrapper[5007]: I0218 19:39:45.993613 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c7
96ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.012638 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.029501 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.036980 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.037005 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.037015 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.037028 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.037038 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:46Z","lastTransitionTime":"2026-02-18T19:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.042815 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.053138 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.067279 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.080546 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ff6744-1459-4f93-b17a-fad3fb9c81b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eaa5ac7b67de7ca34ad6deb725dbea9608c0e54080ef4ffe2a1b8b3029076e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.093075 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.108951 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.121489 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.139534 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.139580 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.139593 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.139612 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.139627 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:46Z","lastTransitionTime":"2026-02-18T19:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.149127 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:29Z\\\",\\\"message\\\":\\\"to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z]\\\\nI0218 19:39:29.074508 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0218 19:39:29.074510 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074515 6650 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0218 19:39:29.074525 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074532 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0218 19:39:29.074464 6650 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0218 19:39:29.074543 6650 ob\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771
fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.160448 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.173900 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:46 crc 
kubenswrapper[5007]: I0218 19:39:46.242603 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.242634 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.242646 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.242664 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.242678 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:46Z","lastTransitionTime":"2026-02-18T19:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.345511 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.345547 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.345555 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.345568 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.345578 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:46Z","lastTransitionTime":"2026-02-18T19:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.447546 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.447581 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.447590 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.447604 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.447615 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:46Z","lastTransitionTime":"2026-02-18T19:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.550114 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.550146 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.550154 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.550167 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.550176 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:46Z","lastTransitionTime":"2026-02-18T19:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.652093 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.652133 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.652148 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.652166 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.652179 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:46Z","lastTransitionTime":"2026-02-18T19:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.754672 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.754723 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.754735 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.754754 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.754768 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:46Z","lastTransitionTime":"2026-02-18T19:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.856877 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.856932 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.856949 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.856969 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.856983 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:46Z","lastTransitionTime":"2026-02-18T19:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.909146 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.909235 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.909269 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:46 crc kubenswrapper[5007]: E0218 19:39:46.909304 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.909164 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:46 crc kubenswrapper[5007]: E0218 19:39:46.909409 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:46 crc kubenswrapper[5007]: E0218 19:39:46.909552 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:46 crc kubenswrapper[5007]: E0218 19:39:46.909648 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.910762 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:24:00.936632535 +0000 UTC Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.959388 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.959593 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.959688 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.959802 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:46 crc kubenswrapper[5007]: I0218 19:39:46.959875 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:46Z","lastTransitionTime":"2026-02-18T19:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.061734 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.061776 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.061788 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.061806 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.061817 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:47Z","lastTransitionTime":"2026-02-18T19:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.163871 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.164111 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.164203 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.164304 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.164413 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:47Z","lastTransitionTime":"2026-02-18T19:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.267039 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.267086 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.267095 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.267109 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.267119 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:47Z","lastTransitionTime":"2026-02-18T19:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.369180 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.369246 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.369280 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.369310 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.369332 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:47Z","lastTransitionTime":"2026-02-18T19:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.456975 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kjbbj_9c2fc812-cdd8-415b-a138-b8a3e14fd396/kube-multus/0.log" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.457046 5007 generic.go:334] "Generic (PLEG): container finished" podID="9c2fc812-cdd8-415b-a138-b8a3e14fd396" containerID="0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6" exitCode=1 Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.457094 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kjbbj" event={"ID":"9c2fc812-cdd8-415b-a138-b8a3e14fd396","Type":"ContainerDied","Data":"0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6"} Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.457665 5007 scope.go:117] "RemoveContainer" containerID="0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.471069 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.471103 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.471112 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.471127 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.471139 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:47Z","lastTransitionTime":"2026-02-18T19:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.477154 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.490706 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.503460 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.513253 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.526146 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.539047 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ff6744-1459-4f93-b17a-fad3fb9c81b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eaa5ac7b67de7ca34ad6deb725dbea9608c0e54080ef4ffe2a1b8b3029076e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.553183 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.566242 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.572930 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.572956 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.572964 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.572977 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.572986 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:47Z","lastTransitionTime":"2026-02-18T19:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.577617 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.598618 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:29Z\\\",\\\"message\\\":\\\"to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z]\\\\nI0218 19:39:29.074508 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0218 19:39:29.074510 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074515 6650 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0218 19:39:29.074525 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074532 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0218 19:39:29.074464 6650 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0218 19:39:29.074543 6650 ob\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771
fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.608825 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.619509 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc 
kubenswrapper[5007]: I0218 19:39:47.631675 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.644638 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\"
,\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.656789 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.668878 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:47Z\\\",\\\"message\\\":\\\"2026-02-18T19:39:01+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a407854b-76fb-470b-9b8b-72274f74342a\\\\n2026-02-18T19:39:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a407854b-76fb-470b-9b8b-72274f74342a to /host/opt/cni/bin/\\\\n2026-02-18T19:39:02Z [verbose] multus-daemon started\\\\n2026-02-18T19:39:02Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:39:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.681350 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c796ea9a408021c4ac7ce59a075ed8
ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.681968 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.681998 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.682009 5007 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.682026 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.682038 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:47Z","lastTransitionTime":"2026-02-18T19:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.785042 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.785101 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.785117 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.785147 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.785160 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:47Z","lastTransitionTime":"2026-02-18T19:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.888031 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.888085 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.888097 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.888114 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.888125 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:47Z","lastTransitionTime":"2026-02-18T19:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.912221 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:08:17.316286496 +0000 UTC Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.990858 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.990906 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.990925 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.990948 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:47 crc kubenswrapper[5007]: I0218 19:39:47.990965 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:47Z","lastTransitionTime":"2026-02-18T19:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.093316 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.093383 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.093398 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.093442 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.093461 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:48Z","lastTransitionTime":"2026-02-18T19:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.195718 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.195965 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.196108 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.196212 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.196333 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:48Z","lastTransitionTime":"2026-02-18T19:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.298598 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.298630 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.298639 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.298651 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.298660 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:48Z","lastTransitionTime":"2026-02-18T19:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.401489 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.401540 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.401558 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.401582 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.401597 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:48Z","lastTransitionTime":"2026-02-18T19:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.467281 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kjbbj_9c2fc812-cdd8-415b-a138-b8a3e14fd396/kube-multus/0.log" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.467366 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kjbbj" event={"ID":"9c2fc812-cdd8-415b-a138-b8a3e14fd396","Type":"ContainerStarted","Data":"b55b8cd27548291a3d8d544065d6d311b2c0c87267c7dcec461b243a28ea14d9"} Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.486761 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.501094 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b
5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.504934 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.505006 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.505024 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:48 crc 
kubenswrapper[5007]: I0218 19:39:48.505052 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.505074 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:48Z","lastTransitionTime":"2026-02-18T19:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.515650 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.528791 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.541005 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.552705 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ff6744-1459-4f93-b17a-fad3fb9c81b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eaa5ac7b67de7ca34ad6deb725dbea9608c0e54080ef4ffe2a1b8b3029076e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.565986 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.580703 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.594027 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.607743 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.607788 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.607801 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.607819 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.607836 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:48Z","lastTransitionTime":"2026-02-18T19:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.617961 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.639201 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.658193 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:29Z\\\",\\\"message\\\":\\\"to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z]\\\\nI0218 19:39:29.074508 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0218 19:39:29.074510 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074515 6650 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0218 19:39:29.074525 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074532 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0218 19:39:29.074464 6650 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0218 19:39:29.074543 6650 ob\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771
fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.670754 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.688132 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55b8cd27548291a3d8d544065d6d311b2c0c87267c7dcec461b243a28ea14d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:47Z\\\",\\\"message\\\":\\\"2026-02-18T19:39:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a407854b-76fb-470b-9b8b-72274f74342a\\\\n2026-02-18T19:39:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a407854b-76fb-470b-9b8b-72274f74342a to /host/opt/cni/bin/\\\\n2026-02-18T19:39:02Z [verbose] multus-daemon started\\\\n2026-02-18T19:39:02Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:39:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.697871 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c796ea9a408021c4ac7ce59a075ed8
ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.710065 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.710104 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.710116 5007 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.710132 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.710146 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:48Z","lastTransitionTime":"2026-02-18T19:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.714984 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.728117 5007 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f9
3a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.813446 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.813496 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.813510 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.813528 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.813541 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:48Z","lastTransitionTime":"2026-02-18T19:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.909153 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.909152 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:48 crc kubenswrapper[5007]: E0218 19:39:48.909328 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.909368 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:48 crc kubenswrapper[5007]: E0218 19:39:48.909534 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:48 crc kubenswrapper[5007]: E0218 19:39:48.909657 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.909796 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:48 crc kubenswrapper[5007]: E0218 19:39:48.909886 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.913391 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:41:24.547508151 +0000 UTC Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.916167 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.916201 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.916211 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.916227 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:48 crc kubenswrapper[5007]: I0218 19:39:48.916243 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:48Z","lastTransitionTime":"2026-02-18T19:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.019148 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.019200 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.019215 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.019233 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.019243 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:49Z","lastTransitionTime":"2026-02-18T19:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.121901 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.121938 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.121947 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.121976 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.121986 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:49Z","lastTransitionTime":"2026-02-18T19:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.224706 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.224742 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.224751 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.224767 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.224778 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:49Z","lastTransitionTime":"2026-02-18T19:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.327413 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.327461 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.327472 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.327484 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.327494 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:49Z","lastTransitionTime":"2026-02-18T19:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.429639 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.429670 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.429678 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.429689 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.429697 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:49Z","lastTransitionTime":"2026-02-18T19:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.531660 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.531697 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.531705 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.531718 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.531727 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:49Z","lastTransitionTime":"2026-02-18T19:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.634527 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.634597 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.634615 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.634644 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.634664 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:49Z","lastTransitionTime":"2026-02-18T19:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.737797 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.737881 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.737901 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.737932 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.737952 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:49Z","lastTransitionTime":"2026-02-18T19:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.841733 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.841790 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.841801 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.841825 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.841837 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:49Z","lastTransitionTime":"2026-02-18T19:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.914044 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:44:46.000682946 +0000 UTC Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.943675 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.943716 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.943728 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.943749 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:49 crc kubenswrapper[5007]: I0218 19:39:49.943762 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:49Z","lastTransitionTime":"2026-02-18T19:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.046473 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.046529 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.046541 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.046558 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.046574 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:50Z","lastTransitionTime":"2026-02-18T19:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.149500 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.149575 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.149597 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.149621 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.149638 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:50Z","lastTransitionTime":"2026-02-18T19:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.252482 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.252538 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.252558 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.252580 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.252596 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:50Z","lastTransitionTime":"2026-02-18T19:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.355834 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.355893 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.355916 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.355942 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.355961 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:50Z","lastTransitionTime":"2026-02-18T19:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.458051 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.458094 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.458108 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.458122 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.458131 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:50Z","lastTransitionTime":"2026-02-18T19:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.561415 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.561507 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.561526 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.561552 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.561570 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:50Z","lastTransitionTime":"2026-02-18T19:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.663694 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.663732 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.663742 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.663756 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.663766 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:50Z","lastTransitionTime":"2026-02-18T19:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.766566 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.766646 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.766663 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.766692 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.766710 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:50Z","lastTransitionTime":"2026-02-18T19:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.869374 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.869459 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.869474 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.869497 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.869511 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:50Z","lastTransitionTime":"2026-02-18T19:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.908903 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.908980 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:50 crc kubenswrapper[5007]: E0218 19:39:50.909266 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.909014 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.908976 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:50 crc kubenswrapper[5007]: E0218 19:39:50.909393 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:50 crc kubenswrapper[5007]: E0218 19:39:50.909566 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:50 crc kubenswrapper[5007]: E0218 19:39:50.909751 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.914750 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:55:46.946372184 +0000 UTC Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.973523 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.973591 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.973612 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.973641 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:50 crc kubenswrapper[5007]: I0218 19:39:50.973662 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:50Z","lastTransitionTime":"2026-02-18T19:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.077054 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.077102 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.077124 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.077152 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.077173 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:51Z","lastTransitionTime":"2026-02-18T19:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.180650 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.180751 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.180770 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.180794 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.180811 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:51Z","lastTransitionTime":"2026-02-18T19:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.283766 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.283826 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.283843 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.283862 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.283876 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:51Z","lastTransitionTime":"2026-02-18T19:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.387461 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.387522 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.387544 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.387571 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.387588 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:51Z","lastTransitionTime":"2026-02-18T19:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.489641 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.489703 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.489716 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.489735 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.489749 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:51Z","lastTransitionTime":"2026-02-18T19:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.592128 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.592175 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.592187 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.592204 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.592218 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:51Z","lastTransitionTime":"2026-02-18T19:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.695248 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.695299 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.695311 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.695328 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.695339 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:51Z","lastTransitionTime":"2026-02-18T19:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.798122 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.798175 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.798187 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.798204 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.798218 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:51Z","lastTransitionTime":"2026-02-18T19:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.901059 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.901241 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.901273 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.901367 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.901396 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:51Z","lastTransitionTime":"2026-02-18T19:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:51 crc kubenswrapper[5007]: I0218 19:39:51.914842 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:18:48.332608389 +0000 UTC Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.004170 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.004249 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.004270 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.004296 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.004314 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:52Z","lastTransitionTime":"2026-02-18T19:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.107382 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.107482 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.107499 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.107522 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.107535 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:52Z","lastTransitionTime":"2026-02-18T19:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.210601 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.210674 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.210697 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.210728 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.210750 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:52Z","lastTransitionTime":"2026-02-18T19:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.313893 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.313950 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.313967 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.313991 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.314009 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:52Z","lastTransitionTime":"2026-02-18T19:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.417558 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.417609 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.417627 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.417651 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.417668 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:52Z","lastTransitionTime":"2026-02-18T19:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.521068 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.521143 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.521160 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.521185 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.521204 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:52Z","lastTransitionTime":"2026-02-18T19:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.624893 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.624969 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.624990 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.625021 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.625044 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:52Z","lastTransitionTime":"2026-02-18T19:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.728460 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.728523 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.728543 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.728568 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.728588 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:52Z","lastTransitionTime":"2026-02-18T19:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.831950 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.831997 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.832007 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.832024 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.832033 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:52Z","lastTransitionTime":"2026-02-18T19:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.909045 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:52 crc kubenswrapper[5007]: E0218 19:39:52.909201 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.909482 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:52 crc kubenswrapper[5007]: E0218 19:39:52.909546 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.909623 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:52 crc kubenswrapper[5007]: E0218 19:39:52.909804 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.909889 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:52 crc kubenswrapper[5007]: E0218 19:39:52.910095 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.915156 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 23:17:11.643179458 +0000 UTC Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.935049 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.935109 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.935121 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.935139 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:52 crc kubenswrapper[5007]: I0218 19:39:52.935152 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:52Z","lastTransitionTime":"2026-02-18T19:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.038290 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.038340 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.038357 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.038381 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.038399 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.140232 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.140268 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.140281 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.140294 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.140304 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.242905 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.242952 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.242961 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.242977 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.242987 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.346341 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.346399 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.346448 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.346494 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.346514 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.449014 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.449064 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.449083 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.449106 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.449124 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.552531 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.552613 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.552635 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.552658 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.552674 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.655191 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.655250 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.655263 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.655283 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.655296 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.757405 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.757464 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.757473 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.757487 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.757497 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.819675 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.819751 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.819777 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.819805 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.819823 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: E0218 19:39:53.846852 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.852282 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.852352 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.852371 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.852397 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.852416 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: E0218 19:39:53.874157 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the 19:39:53.846852 attempt above ...] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.880078 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.880134 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.880153 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.880176 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.880194 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: E0218 19:39:53.896761 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the 19:39:53.846852 attempt above ...]
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.902072 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.902145 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.902163 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.902189 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.902208 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.915394 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:59:52.04658069 +0000 UTC Feb 18 19:39:53 crc kubenswrapper[5007]: E0218 19:39:53.923956 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",
\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.928570 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.928610 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.928623 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.928641 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.928655 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:53 crc kubenswrapper[5007]: E0218 19:39:53.946104 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1f6dd9a0-63c4-4274-b1e6-2404360e9b80\\\",\\\"systemUUID\\\":\\\"7a8712a4-988b-42cb-9846-bbae317fd720\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:53 crc kubenswrapper[5007]: E0218 19:39:53.946269 5007 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.947786 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.947842 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.947860 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.947882 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:53 crc kubenswrapper[5007]: I0218 19:39:53.947898 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:53Z","lastTransitionTime":"2026-02-18T19:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.050743 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.050792 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.050806 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.050829 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.050845 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:54Z","lastTransitionTime":"2026-02-18T19:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.153751 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.153807 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.153819 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.153839 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.153858 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:54Z","lastTransitionTime":"2026-02-18T19:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.256543 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.256586 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.256596 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.256612 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.256624 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:54Z","lastTransitionTime":"2026-02-18T19:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.359709 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.359783 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.359796 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.359816 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.359825 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:54Z","lastTransitionTime":"2026-02-18T19:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.462351 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.462392 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.462406 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.462421 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.462451 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:54Z","lastTransitionTime":"2026-02-18T19:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.565705 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.565762 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.565777 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.565794 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.565807 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:54Z","lastTransitionTime":"2026-02-18T19:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.669071 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.669103 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.669111 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.669124 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.669133 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:54Z","lastTransitionTime":"2026-02-18T19:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.771526 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.771587 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.771597 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.771619 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.771634 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:54Z","lastTransitionTime":"2026-02-18T19:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.873967 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.874019 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.874036 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.874055 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.874068 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:54Z","lastTransitionTime":"2026-02-18T19:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.909498 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.909582 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.909607 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.909513 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:54 crc kubenswrapper[5007]: E0218 19:39:54.909717 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:54 crc kubenswrapper[5007]: E0218 19:39:54.909823 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:54 crc kubenswrapper[5007]: E0218 19:39:54.909890 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:54 crc kubenswrapper[5007]: E0218 19:39:54.910021 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.915886 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:17:38.409690404 +0000 UTC Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.976975 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.977048 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.977061 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.977078 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:54 crc kubenswrapper[5007]: I0218 19:39:54.977090 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:54Z","lastTransitionTime":"2026-02-18T19:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.080023 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.080066 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.080081 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.080101 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.080116 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:55Z","lastTransitionTime":"2026-02-18T19:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.183471 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.183515 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.183526 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.183543 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.183554 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:55Z","lastTransitionTime":"2026-02-18T19:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.286244 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.286288 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.286296 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.286310 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.286319 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:55Z","lastTransitionTime":"2026-02-18T19:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.389717 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.389823 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.389841 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.389859 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.389895 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:55Z","lastTransitionTime":"2026-02-18T19:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.493381 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.493466 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.493485 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.493510 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.493524 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:55Z","lastTransitionTime":"2026-02-18T19:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.596710 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.596791 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.596810 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.596840 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.596859 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:55Z","lastTransitionTime":"2026-02-18T19:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.699981 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.700021 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.700038 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.700057 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.700093 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:55Z","lastTransitionTime":"2026-02-18T19:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.802645 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.802676 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.802700 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.802742 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.802752 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:55Z","lastTransitionTime":"2026-02-18T19:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.905086 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.905164 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.905184 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.905212 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.905232 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:55Z","lastTransitionTime":"2026-02-18T19:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.910221 5007 scope.go:117] "RemoveContainer" containerID="e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.916156 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:52:09.839877701 +0000 UTC Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.924152 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\
"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.938836 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.951690 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.973209 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:55 crc kubenswrapper[5007]: I0218 19:39:55.992977 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.008613 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.008824 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.008861 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.009277 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.009302 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:56Z","lastTransitionTime":"2026-02-18T19:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.013586 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.036973 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ff6744-1459-4f93-b17a-fad3fb9c81b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eaa5ac7b67de7ca34ad6deb725dbea9608c0e54080ef4ffe2a1b8b3029076e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f
9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.055362 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:29Z\\\",\\\"message\\\":\\\"to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z]\\\\nI0218 19:39:29.074508 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0218 19:39:29.074510 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074515 6650 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0218 19:39:29.074525 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074532 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0218 19:39:29.074464 6650 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0218 19:39:29.074543 6650 ob\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771
fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.066259 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.079670 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc 
kubenswrapper[5007]: I0218 19:39:56.097371 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.110403 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.114999 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.115073 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.115090 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.115114 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.115149 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:56Z","lastTransitionTime":"2026-02-18T19:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.126127 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.138909 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.154217 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55b8cd27548291a3d8d544065d6d311b2c0c87267c7dcec461b243a28ea14d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:47Z\\\",\\\"message\\\":\\\"2026-02-18T19:39:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a407854b-76fb-470b-9b8b-72274f74342a\\\\n2026-02-18T19:39:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a407854b-76fb-470b-9b8b-72274f74342a to /host/opt/cni/bin/\\\\n2026-02-18T19:39:02Z [verbose] multus-daemon started\\\\n2026-02-18T19:39:02Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:39:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.165188 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c796ea9a408021c4ac7ce59a075ed8
ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.177223 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.219590 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.219628 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.219642 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.219659 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.219668 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:56Z","lastTransitionTime":"2026-02-18T19:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.321926 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.321977 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.321989 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.322010 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.322022 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:56Z","lastTransitionTime":"2026-02-18T19:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.424395 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.424472 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.424487 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.424507 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.424522 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:56Z","lastTransitionTime":"2026-02-18T19:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.495168 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/2.log" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.498352 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec"} Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.499146 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.517686 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.527671 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.527766 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.527817 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.527849 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.527868 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:56Z","lastTransitionTime":"2026-02-18T19:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.531412 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.550286 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.567994 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.582020 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.598709 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6
884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.615744 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ff6744-1459-4f93-b17a-fad3fb9c81b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eaa5ac7b67de7ca34ad6deb725dbea9608c0e54080ef4ffe2a1b8b3029076e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f
9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.630731 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.630773 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.630787 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.630801 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:56 crc 
kubenswrapper[5007]: I0218 19:39:56.630811 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:56Z","lastTransitionTime":"2026-02-18T19:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.641257 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:29Z\\\",\\\"message\\\":\\\"to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z]\\\\nI0218 19:39:29.074508 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0218 19:39:29.074510 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074515 6650 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0218 19:39:29.074525 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074532 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0218 19:39:29.074464 6650 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0218 19:39:29.074543 6650 
ob\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.652203 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.662682 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc 
kubenswrapper[5007]: I0218 19:39:56.678406 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.693018 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.711965 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe526
5755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.725791 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.733447 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.733505 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.733515 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 
19:39:56.733532 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.733544 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:56Z","lastTransitionTime":"2026-02-18T19:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.739193 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55b8cd27548291a3d8d544065d6d311b2c0c87267c7dcec461b243a28ea14d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:47Z\\\",\\\"message\\\":\\\"2026-02-18T19:39:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a407854b-76fb-470b-9b8b-72274f74342a\\\\n2026-02-18T19:39:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a407854b-76fb-470b-9b8b-72274f74342a to /host/opt/cni/bin/\\\\n2026-02-18T19:39:02Z [verbose] multus-daemon started\\\\n2026-02-18T19:39:02Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:39:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.754679 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c796ea9a408021c4ac7ce59a075ed8ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:56 crc kubenswrapper[5007]: I0218 19:39:56.767758 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.120840 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:32:19.234831367 +0000 UTC Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.121446 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:57 crc kubenswrapper[5007]: E0218 19:39:57.121582 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.121803 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.121797 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:57 crc kubenswrapper[5007]: E0218 19:39:57.121865 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.121936 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:57 crc kubenswrapper[5007]: E0218 19:39:57.122065 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:57 crc kubenswrapper[5007]: E0218 19:39:57.122174 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.122856 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.122884 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.122895 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.122912 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.122924 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:57Z","lastTransitionTime":"2026-02-18T19:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.225564 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.225597 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.225604 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.225616 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.225625 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:57Z","lastTransitionTime":"2026-02-18T19:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.329414 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.329470 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.329483 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.329500 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.329515 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:57Z","lastTransitionTime":"2026-02-18T19:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.971729 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.971781 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.971793 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.971996 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.972007 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:57Z","lastTransitionTime":"2026-02-18T19:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.978701 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/3.log" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.980069 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/2.log" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.983445 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" containerID="ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec" exitCode=1 Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.983458 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec"} Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.983532 5007 scope.go:117] "RemoveContainer" containerID="e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72" Feb 18 19:39:57 crc kubenswrapper[5007]: I0218 19:39:57.984491 5007 scope.go:117] "RemoveContainer" containerID="ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec" Feb 18 19:39:57 crc kubenswrapper[5007]: E0218 19:39:57.987186 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.003991 5007 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.019757 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.034886 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.048271 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.064194 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.074630 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.074664 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.074675 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.074693 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.074705 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:58Z","lastTransitionTime":"2026-02-18T19:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.081759 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.094720 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ff6744-1459-4f93-b17a-fad3fb9c81b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eaa5ac7b67de7ca34ad6deb725dbea9608c0e54080ef4ffe2a1b8b3029076e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.121068 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:33:15.845810996 +0000 UTC Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.177674 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.177712 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.177752 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.177775 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.177792 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:58Z","lastTransitionTime":"2026-02-18T19:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.280194 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.280237 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.280248 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.280267 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.280277 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:58Z","lastTransitionTime":"2026-02-18T19:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.386842 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.407288 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64d13644d07dbedd4a896999e18e65829ca4595956c4f6c7660cac1bd287e72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:29Z\\\",\\\"message\\\":\\\"to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:28Z is after 2025-08-24T17:21:41Z]\\\\nI0218 19:39:29.074508 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0218 19:39:29.074510 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074515 6650 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}\\\\nI0218 19:39:29.074525 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0218 19:39:29.074532 6650 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0218 19:39:29.074464 6650 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0218 19:39:29.074543 6650 ob\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:57Z\\\",\\\"message\\\":\\\" 7058 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:39:57.212662 7058 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:39:57.212678 7058 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:39:57.212687 7058 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:39:57.212694 7058 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:39:57.212702 7058 handler.go:208] 
Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:39:57.212709 7058 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:39:57.212742 7058 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:57.212866 7058 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:57.212141 7058 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:39:57.213337 7058 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:57.213544 7058 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\
\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.419566 5007 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.431727 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc 
kubenswrapper[5007]: I0218 19:39:58.446391 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.464152 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf
157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.484224 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\"
,\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.500045 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.516637 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55b8cd27548291a3d8d544065d6d311b2c0c87267c7dcec461b243a28ea14d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:47Z\\\",\\\"message\\\":\\\"2026-02-18T19:39:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a407854b-76fb-470b-9b8b-72274f74342a\\\\n2026-02-18T19:39:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a407854b-76fb-470b-9b8b-72274f74342a to /host/opt/cni/bin/\\\\n2026-02-18T19:39:02Z [verbose] multus-daemon started\\\\n2026-02-18T19:39:02Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:39:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.530148 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c796ea9a408021c4ac7ce59a075ed8
ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.808606 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.808686 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.808700 5007 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.808721 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.808735 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:58Z","lastTransitionTime":"2026-02-18T19:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.909087 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:39:58 crc kubenswrapper[5007]: E0218 19:39:58.909494 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.909163 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:39:58 crc kubenswrapper[5007]: E0218 19:39:58.909822 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.909083 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.909219 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:39:58 crc kubenswrapper[5007]: E0218 19:39:58.909900 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:39:58 crc kubenswrapper[5007]: E0218 19:39:58.910068 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.910967 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.910996 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.911005 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.911018 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.911026 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:58Z","lastTransitionTime":"2026-02-18T19:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.987280 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/3.log" Feb 18 19:39:58 crc kubenswrapper[5007]: I0218 19:39:58.990322 5007 scope.go:117] "RemoveContainer" containerID="ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec" Feb 18 19:39:58 crc kubenswrapper[5007]: E0218 19:39:58.990658 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.004122 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64b41f4b-b59d-43c9-8652-dddc22ed4685\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a57922f53c5bf90dde0459b1189f4d90148d6177214a04128cf84cc186d70da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0c4de52209a8d5ee711f4193809ce26790dc637b532d2a07ad59c2e00919b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c72dca0df7b769c499dbd79e514d16d12381ca573772be96732cf157ef2af76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.012719 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.012749 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.012760 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.012774 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.012784 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:59Z","lastTransitionTime":"2026-02-18T19:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.017569 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52555a16-10cb-4142-a2ca-b0a2fcc76e4e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountP
ath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:38:49Z\\\",\\\"message\\\":\\\"W0218 19:38:39.083664 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:38:39.083991 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771443519 cert, and key in /tmp/serving-cert-2421708602/serving-signer.crt, /tmp/serving-cert-2421708602/serving-signer.key\\\\nI0218 19:38:39.384410 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:38:39.389004 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:38:39.389188 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:38:39.391094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2421708602/tls.crt::/tmp/serving-cert-2421708602/tls.key\\\\\\\"\\\\nF0218 19:38:49.889676 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.029022 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.040505 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kjbbj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2fc812-cdd8-415b-a138-b8a3e14fd396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b55b8cd27548291a3d8d544065d6d311b2c0c87267c7dcec461b243a28ea14d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:47Z\\\",\\\"message\\\":\\\"2026-02-18T19:39:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a407854b-76fb-470b-9b8b-72274f74342a\\\\n2026-02-18T19:39:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a407854b-76fb-470b-9b8b-72274f74342a to /host/opt/cni/bin/\\\\n2026-02-18T19:39:02Z [verbose] multus-daemon started\\\\n2026-02-18T19:39:02Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:39:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r687l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kjbbj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.051591 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4c08908-559a-4963-869b-17a19622586a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f64ee76bed5459fd743336f19dd5b8471c3350b83aa355f9652e54a42381fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1fc95c796ea9a408021c4ac7ce59a075ed8
ced0b99d25f0b68c0ebaec75d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6sw9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9bf5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.062753 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.074079 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebe1be452bf574262a7f2501300bf39feba66ad92d744ef158abaed0db506686\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a5f0b13682deaf95170e9074f4751804cd23ed92bf91a9242a3e1cbba38e7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.083165 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgdpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10ebe7a1-443b-44bf-ba10-9f79a766a932\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7689a6723f486a91624232cacdd3b135e64d0978bc3ca6bc9cf30842be0eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gq5dt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgdpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.093302 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1f79950-bc88-4358-844e-ba7f87d1b564\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d33a52392b1b612dc15062c205e2cb4606e0b3da15b9989434fbf3798f5ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d56pg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d8p88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.104320 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.114761 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.114913 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.115038 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.115123 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.115197 5007 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:59Z","lastTransitionTime":"2026-02-18T19:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.117182 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db715e4-a72b-4f74-b2e9-9d529e2b0a47\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13065ba3ce99a1040519f6468c9dea8a280684ba0413463310f39e5db161dd29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63032235ec03df2a3ecd211ba13e5fbdf7a89b0eb58bfc7eefb9c28f8a26fca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d36a4441c63bc1a98b5ee994292ef356d67e52c5c84008c81df8949eb1602ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce5adbcbb5a02b12bc28471c3194df84f028ffd5cfbd4b852dba26ec30165ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:02Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://165a6884c67d5fc5a68ab665796ada1564950d83dac8e3392ad921e2fab3bb5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://420e68138acaa37330cbedb820a9ce5c70be48a55706df7d54b702c208c47672\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba8a43bb97e22762af6c2ba1ba373e05c7852cdd5a9fd64decfc5d68c0ab36b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xqkh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hhb7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.121305 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:30:52.03611791 +0000 UTC Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.130005 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2ff6744-1459-4f93-b17a-fad3fb9c81b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9522256c072cc94aa55ebea7b1f2a511ce1323436fce95fce8fc6936f7ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eaa5ac7b67de7ca34ad6deb725dbea9608c0e54080ef4ffe2a1b8b3029076e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee769cbe77558e76590c8e0f61d5422bcaa8c99c8f59c0761f4eeff0847f1f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b43f494bba595280a6196376f9a0dca6caee1c9d56f30a02e63b2c3ce4e6d995\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:38:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:38:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.140909 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e3ebbfe65ecf0562c1edfc791404bfd1d03c3ba30eb4a566a8e1a6d11191cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.170047 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3b9770-e878-4070-978e-29931dd72c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:39:57Z\\\",\\\"message\\\":\\\" 7058 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:39:57.212662 7058 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:39:57.212678 7058 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:39:57.212687 7058 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:39:57.212694 7058 
handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:39:57.212702 7058 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:39:57.212709 7058 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:39:57.212742 7058 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:39:57.212866 7058 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:57.212141 7058 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:39:57.213337 7058 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:39:57.213544 7058 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8de2e6a5bfee1bd771
fea159d6cc39b48ded62cbb58d270144430c6685e11065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:39:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89t88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-75flr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.183715 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w77zw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0118dcb7-4b95-4644-8702-62d7c92f9001\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c67f7f39bca82ce84bb22ad41964ce412f08c567676c63e71a195f86b85fe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:39:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wk298\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:38:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w77zw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.203810 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-96xnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:39:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:39:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-96xnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc 
kubenswrapper[5007]: I0218 19:39:59.216701 5007 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:38:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e80b94a97a9f644d243cb9f778811ff9e9bcdd5df69793556180f2814cd0b58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:38:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:39:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.218315 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.218367 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.218382 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.218399 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.218412 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:59Z","lastTransitionTime":"2026-02-18T19:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.321224 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.321261 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.321273 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.321289 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.321301 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:59Z","lastTransitionTime":"2026-02-18T19:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.424574 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.424621 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.424634 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.424659 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.424671 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:59Z","lastTransitionTime":"2026-02-18T19:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.526841 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.526877 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.526886 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.526900 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.526910 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:59Z","lastTransitionTime":"2026-02-18T19:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.629914 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.629968 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.629977 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.629995 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.630004 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:59Z","lastTransitionTime":"2026-02-18T19:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.732918 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.732975 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.732985 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.733001 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.733012 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:59Z","lastTransitionTime":"2026-02-18T19:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.836775 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.836833 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.836872 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.836897 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.836909 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:39:59Z","lastTransitionTime":"2026-02-18T19:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:39:59 crc kubenswrapper[5007]: I0218 19:39:59.890992 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:39:59 crc kubenswrapper[5007]: E0218 19:39:59.891204 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:41:03.891187391 +0000 UTC m=+148.673306915 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:00 crc kubenswrapper[5007]: I0218 19:40:00.196997 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:17:07.953325753 +0000 UTC Feb 18 19:40:00 crc kubenswrapper[5007]: I0218 19:40:00.197526 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:00 crc kubenswrapper[5007]: I0218 19:40:00.197552 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:00 crc kubenswrapper[5007]: I0218 19:40:00.197579 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:00 crc kubenswrapper[5007]: I0218 19:40:00.197621 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:00 crc kubenswrapper[5007]: E0218 19:40:00.197721 5007 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:40:00 crc kubenswrapper[5007]: E0218 19:40:00.197764 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.197752488 +0000 UTC m=+148.979872012 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:40:00 crc kubenswrapper[5007]: E0218 19:40:00.197823 5007 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:40:00 crc kubenswrapper[5007]: E0218 19:40:00.197842 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.19783619 +0000 UTC m=+148.979955704 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:40:00 crc kubenswrapper[5007]: E0218 19:40:00.197890 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:40:00 crc kubenswrapper[5007]: E0218 19:40:00.197900 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:40:00 crc kubenswrapper[5007]: E0218 19:40:00.197911 5007 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:40:00 crc kubenswrapper[5007]: E0218 19:40:00.197931 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.197925493 +0000 UTC m=+148.980045017 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:40:00 crc kubenswrapper[5007]: E0218 19:40:00.197971 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:40:00 crc kubenswrapper[5007]: E0218 19:40:00.197979 5007 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:40:00 crc kubenswrapper[5007]: E0218 19:40:00.197986 5007 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:40:00 crc kubenswrapper[5007]: E0218 19:40:00.198002 5007 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.197997755 +0000 UTC m=+148.980117279 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:40:00 crc kubenswrapper[5007]: I0218 19:40:00.198823 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:00 crc kubenswrapper[5007]: I0218 19:40:00.198855 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:00 crc kubenswrapper[5007]: I0218 19:40:00.198864 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:00 crc kubenswrapper[5007]: I0218 19:40:00.198877 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:00 crc kubenswrapper[5007]: I0218 19:40:00.198887 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:00Z","lastTransitionTime":"2026-02-18T19:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.110608 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.110654 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.110702 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:01 crc kubenswrapper[5007]: E0218 19:40:01.110850 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.111120 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:01 crc kubenswrapper[5007]: E0218 19:40:01.111197 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:01 crc kubenswrapper[5007]: E0218 19:40:01.111469 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:01 crc kubenswrapper[5007]: E0218 19:40:01.111664 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.153288 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.156021 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.156066 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.156079 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.156104 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.156118 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:01Z","lastTransitionTime":"2026-02-18T19:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.197906 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 00:26:09.183175987 +0000 UTC Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.258409 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.258479 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.258490 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.258508 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.258523 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:01Z","lastTransitionTime":"2026-02-18T19:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.361191 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.361269 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.361301 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.361333 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.361356 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:01Z","lastTransitionTime":"2026-02-18T19:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.463872 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.463911 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.463946 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.463963 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.463978 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:01Z","lastTransitionTime":"2026-02-18T19:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.566500 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.566544 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.566555 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.566570 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.566583 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:01Z","lastTransitionTime":"2026-02-18T19:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.671029 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.671079 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.671091 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.671113 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.671130 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:01Z","lastTransitionTime":"2026-02-18T19:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.774627 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.774674 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.774683 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.774700 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.774712 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:01Z","lastTransitionTime":"2026-02-18T19:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.877103 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.877135 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.877144 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.877156 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.877166 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:01Z","lastTransitionTime":"2026-02-18T19:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.980842 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.980915 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.980943 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.980976 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:01 crc kubenswrapper[5007]: I0218 19:40:01.980999 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:01Z","lastTransitionTime":"2026-02-18T19:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.088982 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.089392 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.089585 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.089736 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.089878 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:02Z","lastTransitionTime":"2026-02-18T19:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.193392 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.193470 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.193483 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.193497 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.193510 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:02Z","lastTransitionTime":"2026-02-18T19:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.198837 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:46:26.944778792 +0000 UTC Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.296413 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.296506 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.296526 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.296555 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.296573 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:02Z","lastTransitionTime":"2026-02-18T19:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.399839 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.399910 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.399935 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.399964 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.399985 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:02Z","lastTransitionTime":"2026-02-18T19:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.502709 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.502972 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.503059 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.503290 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.503389 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:02Z","lastTransitionTime":"2026-02-18T19:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.606111 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.606443 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.606564 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.606663 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.606752 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:02Z","lastTransitionTime":"2026-02-18T19:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.710673 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.711273 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.711551 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.711643 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.711725 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:02Z","lastTransitionTime":"2026-02-18T19:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.815222 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.815506 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.815572 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.815634 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.815707 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:02Z","lastTransitionTime":"2026-02-18T19:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.908721 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.909305 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:02 crc kubenswrapper[5007]: E0218 19:40:02.909416 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.909180 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.909203 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:02 crc kubenswrapper[5007]: E0218 19:40:02.909978 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:02 crc kubenswrapper[5007]: E0218 19:40:02.910049 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:02 crc kubenswrapper[5007]: E0218 19:40:02.910117 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.918176 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.918196 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.918204 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.918216 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:02 crc kubenswrapper[5007]: I0218 19:40:02.918227 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:02Z","lastTransitionTime":"2026-02-18T19:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.020414 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.020489 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.020505 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.020525 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.020539 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:03Z","lastTransitionTime":"2026-02-18T19:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.123669 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.124024 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.124228 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.124474 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.124694 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:03Z","lastTransitionTime":"2026-02-18T19:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.199180 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:07:46.532616972 +0000 UTC Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.228027 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.228135 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.228158 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.228239 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.228327 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:03Z","lastTransitionTime":"2026-02-18T19:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.331264 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.331319 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.331335 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.331359 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.331376 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:03Z","lastTransitionTime":"2026-02-18T19:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.434224 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.434295 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.434316 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.434343 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.434363 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:03Z","lastTransitionTime":"2026-02-18T19:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.537420 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.537502 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.537513 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.537526 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.537536 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:03Z","lastTransitionTime":"2026-02-18T19:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.640741 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.640825 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.640846 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.640880 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.640906 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:03Z","lastTransitionTime":"2026-02-18T19:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.744060 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.744116 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.744136 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.744162 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.744176 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:03Z","lastTransitionTime":"2026-02-18T19:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.846898 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.846975 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.847000 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.847031 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.847055 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:03Z","lastTransitionTime":"2026-02-18T19:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.949997 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.950067 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.950086 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.950115 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:03 crc kubenswrapper[5007]: I0218 19:40:03.950141 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:03Z","lastTransitionTime":"2026-02-18T19:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.054111 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.054171 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.054190 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.054217 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.054236 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:04Z","lastTransitionTime":"2026-02-18T19:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.103610 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.103684 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.103717 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.103755 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.103779 5007 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:40:04Z","lastTransitionTime":"2026-02-18T19:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.201114 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:20:59.977226788 +0000 UTC Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.201200 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.210355 5007 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.222046 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l"] Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.222701 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.225463 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.226067 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.226130 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.226130 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.250997 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/deb8394d-1208-4a9f-b1f0-61f05ee85675-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.251193 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/deb8394d-1208-4a9f-b1f0-61f05ee85675-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.251341 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/deb8394d-1208-4a9f-b1f0-61f05ee85675-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.251388 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/deb8394d-1208-4a9f-b1f0-61f05ee85675-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.251503 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/deb8394d-1208-4a9f-b1f0-61f05ee85675-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.260501 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podStartSLOduration=67.260476846 podStartE2EDuration="1m7.260476846s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:04.240480004 +0000 UTC m=+89.022599598" watchObservedRunningTime="2026-02-18 19:40:04.260476846 +0000 UTC m=+89.042596410" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.329629 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bgdpv" podStartSLOduration=67.329594549 podStartE2EDuration="1m7.329594549s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:04.328551519 +0000 UTC m=+89.110671033" watchObservedRunningTime="2026-02-18 19:40:04.329594549 +0000 UTC m=+89.111714113" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.352334 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deb8394d-1208-4a9f-b1f0-61f05ee85675-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.352388 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/deb8394d-1208-4a9f-b1f0-61f05ee85675-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: 
\"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.352453 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/deb8394d-1208-4a9f-b1f0-61f05ee85675-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.352478 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/deb8394d-1208-4a9f-b1f0-61f05ee85675-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.352511 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/deb8394d-1208-4a9f-b1f0-61f05ee85675-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.352583 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/deb8394d-1208-4a9f-b1f0-61f05ee85675-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.352708 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/deb8394d-1208-4a9f-b1f0-61f05ee85675-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.353457 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/deb8394d-1208-4a9f-b1f0-61f05ee85675-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.358687 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deb8394d-1208-4a9f-b1f0-61f05ee85675-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.373125 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.373096581 podStartE2EDuration="38.373096581s" podCreationTimestamp="2026-02-18 19:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:04.34777726 +0000 UTC m=+89.129896784" watchObservedRunningTime="2026-02-18 19:40:04.373096581 +0000 UTC m=+89.155216145" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.373582 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-hhb7p" podStartSLOduration=67.373572815 podStartE2EDuration="1m7.373572815s" 
podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:04.372154735 +0000 UTC m=+89.154274299" watchObservedRunningTime="2026-02-18 19:40:04.373572815 +0000 UTC m=+89.155692369" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.380331 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/deb8394d-1208-4a9f-b1f0-61f05ee85675-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xrg9l\" (UID: \"deb8394d-1208-4a9f-b1f0-61f05ee85675\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.489844 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-w77zw" podStartSLOduration=67.489825232 podStartE2EDuration="1m7.489825232s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:04.472265489 +0000 UTC m=+89.254385013" watchObservedRunningTime="2026-02-18 19:40:04.489825232 +0000 UTC m=+89.271944756" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.490547 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kjbbj" podStartSLOduration=67.490540982 podStartE2EDuration="1m7.490540982s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:04.48867417 +0000 UTC m=+89.270793714" watchObservedRunningTime="2026-02-18 19:40:04.490540982 +0000 UTC m=+89.272660506" Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.504517 5007 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9bf5j" podStartSLOduration=67.504494795 podStartE2EDuration="1m7.504494795s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:04.503923909 +0000 UTC m=+89.286043433" watchObservedRunningTime="2026-02-18 19:40:04.504494795 +0000 UTC m=+89.286614319"
Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.517894 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.517877481 podStartE2EDuration="5.517877481s" podCreationTimestamp="2026-02-18 19:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:04.517478189 +0000 UTC m=+89.299597733" watchObservedRunningTime="2026-02-18 19:40:04.517877481 +0000 UTC m=+89.299997005"
Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.537273 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l"
Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.539120 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=63.539107868 podStartE2EDuration="1m3.539107868s" podCreationTimestamp="2026-02-18 19:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:04.537962315 +0000 UTC m=+89.320081859" watchObservedRunningTime="2026-02-18 19:40:04.539107868 +0000 UTC m=+89.321227382"
Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.581616 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.581589802 podStartE2EDuration="1m10.581589802s" podCreationTimestamp="2026-02-18 19:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:04.56339208 +0000 UTC m=+89.345511604" watchObservedRunningTime="2026-02-18 19:40:04.581589802 +0000 UTC m=+89.363709326"
Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.909256 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.909326 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:40:04 crc kubenswrapper[5007]: E0218 19:40:04.909358 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.909463 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:40:04 crc kubenswrapper[5007]: E0218 19:40:04.909593 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:40:04 crc kubenswrapper[5007]: E0218 19:40:04.909669 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:40:04 crc kubenswrapper[5007]: I0218 19:40:04.909691 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:04 crc kubenswrapper[5007]: E0218 19:40:04.909815 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b"
Feb 18 19:40:05 crc kubenswrapper[5007]: I0218 19:40:05.219787 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" event={"ID":"deb8394d-1208-4a9f-b1f0-61f05ee85675","Type":"ContainerStarted","Data":"3a3ff69d8867e16e6fdabf802ce63aff1e030bfa37267c2937e6712a3f03f0c0"}
Feb 18 19:40:05 crc kubenswrapper[5007]: I0218 19:40:05.219838 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" event={"ID":"deb8394d-1208-4a9f-b1f0-61f05ee85675","Type":"ContainerStarted","Data":"c1296321551610c03e10fe359822ef8968718cf362b9e449176b3ffee0ef1bb2"}
Feb 18 19:40:05 crc kubenswrapper[5007]: I0218 19:40:05.249301 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xrg9l" podStartSLOduration=68.249279977 podStartE2EDuration="1m8.249279977s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:05.247085565 +0000 UTC m=+90.029205099" watchObservedRunningTime="2026-02-18 19:40:05.249279977 +0000 UTC m=+90.031399501"
Feb 18 19:40:06 crc kubenswrapper[5007]: I0218 19:40:06.909872 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:06 crc kubenswrapper[5007]: I0218 19:40:06.909959 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:40:06 crc kubenswrapper[5007]: I0218 19:40:06.909959 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:40:06 crc kubenswrapper[5007]: I0218 19:40:06.910169 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:40:06 crc kubenswrapper[5007]: E0218 19:40:06.910296 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:40:06 crc kubenswrapper[5007]: E0218 19:40:06.910375 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:40:06 crc kubenswrapper[5007]: E0218 19:40:06.910498 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b"
Feb 18 19:40:06 crc kubenswrapper[5007]: E0218 19:40:06.910539 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:40:08 crc kubenswrapper[5007]: I0218 19:40:08.908818 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:40:08 crc kubenswrapper[5007]: I0218 19:40:08.908948 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:40:08 crc kubenswrapper[5007]: I0218 19:40:08.908963 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:08 crc kubenswrapper[5007]: I0218 19:40:08.908983 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:40:08 crc kubenswrapper[5007]: E0218 19:40:08.909076 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:40:08 crc kubenswrapper[5007]: E0218 19:40:08.909187 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b"
Feb 18 19:40:08 crc kubenswrapper[5007]: E0218 19:40:08.909322 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:40:08 crc kubenswrapper[5007]: E0218 19:40:08.909422 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:40:10 crc kubenswrapper[5007]: I0218 19:40:10.909396 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:10 crc kubenswrapper[5007]: I0218 19:40:10.909479 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:40:10 crc kubenswrapper[5007]: I0218 19:40:10.909406 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:40:10 crc kubenswrapper[5007]: E0218 19:40:10.909594 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b"
Feb 18 19:40:10 crc kubenswrapper[5007]: E0218 19:40:10.909662 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:40:10 crc kubenswrapper[5007]: E0218 19:40:10.909724 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:40:10 crc kubenswrapper[5007]: I0218 19:40:10.909755 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:40:10 crc kubenswrapper[5007]: E0218 19:40:10.909973 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:40:11 crc kubenswrapper[5007]: I0218 19:40:11.909836 5007 scope.go:117] "RemoveContainer" containerID="ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec"
Feb 18 19:40:11 crc kubenswrapper[5007]: E0218 19:40:11.910391 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35"
Feb 18 19:40:12 crc kubenswrapper[5007]: I0218 19:40:12.909068 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:40:12 crc kubenswrapper[5007]: E0218 19:40:12.909293 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:40:12 crc kubenswrapper[5007]: I0218 19:40:12.909648 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:40:12 crc kubenswrapper[5007]: I0218 19:40:12.909663 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:40:12 crc kubenswrapper[5007]: I0218 19:40:12.909796 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:12 crc kubenswrapper[5007]: E0218 19:40:12.909937 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:40:12 crc kubenswrapper[5007]: E0218 19:40:12.910046 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:40:12 crc kubenswrapper[5007]: E0218 19:40:12.910505 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b"
Feb 18 19:40:13 crc kubenswrapper[5007]: I0218 19:40:13.935171 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Feb 18 19:40:14 crc kubenswrapper[5007]: I0218 19:40:14.909059 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:40:14 crc kubenswrapper[5007]: I0218 19:40:14.909089 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:40:14 crc kubenswrapper[5007]: I0218 19:40:14.909177 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:14 crc kubenswrapper[5007]: I0218 19:40:14.909331 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:40:14 crc kubenswrapper[5007]: E0218 19:40:14.909521 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:40:14 crc kubenswrapper[5007]: E0218 19:40:14.909677 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:40:14 crc kubenswrapper[5007]: E0218 19:40:14.909925 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b"
Feb 18 19:40:14 crc kubenswrapper[5007]: E0218 19:40:14.909939 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:40:15 crc kubenswrapper[5007]: I0218 19:40:15.780986 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:15 crc kubenswrapper[5007]: E0218 19:40:15.781206 5007 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 18 19:40:15 crc kubenswrapper[5007]: E0218 19:40:15.781322 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs podName:7f72bbfc-96e8-4151-9291-bb3da55e7d2b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:19.781291204 +0000 UTC m=+164.563410758 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs") pod "network-metrics-daemon-96xnl" (UID: "7f72bbfc-96e8-4151-9291-bb3da55e7d2b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 18 19:40:16 crc kubenswrapper[5007]: I0218 19:40:16.909511 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:16 crc kubenswrapper[5007]: I0218 19:40:16.909557 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:40:16 crc kubenswrapper[5007]: I0218 19:40:16.909537 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:40:16 crc kubenswrapper[5007]: E0218 19:40:16.909688 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b"
Feb 18 19:40:16 crc kubenswrapper[5007]: I0218 19:40:16.909744 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:40:16 crc kubenswrapper[5007]: E0218 19:40:16.909892 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:40:16 crc kubenswrapper[5007]: E0218 19:40:16.909974 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:40:16 crc kubenswrapper[5007]: E0218 19:40:16.910043 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:40:18 crc kubenswrapper[5007]: I0218 19:40:18.909618 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:40:18 crc kubenswrapper[5007]: I0218 19:40:18.909638 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:40:18 crc kubenswrapper[5007]: E0218 19:40:18.910825 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:40:18 crc kubenswrapper[5007]: I0218 19:40:18.909755 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:18 crc kubenswrapper[5007]: I0218 19:40:18.909752 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:40:18 crc kubenswrapper[5007]: E0218 19:40:18.911002 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:40:18 crc kubenswrapper[5007]: E0218 19:40:18.911198 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:40:18 crc kubenswrapper[5007]: E0218 19:40:18.911281 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b"
Feb 18 19:40:20 crc kubenswrapper[5007]: I0218 19:40:20.909636 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:40:20 crc kubenswrapper[5007]: I0218 19:40:20.909748 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:40:20 crc kubenswrapper[5007]: E0218 19:40:20.909991 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:40:20 crc kubenswrapper[5007]: I0218 19:40:20.910537 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:20 crc kubenswrapper[5007]: E0218 19:40:20.910751 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b"
Feb 18 19:40:20 crc kubenswrapper[5007]: I0218 19:40:20.911226 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:40:20 crc kubenswrapper[5007]: E0218 19:40:20.911336 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:40:20 crc kubenswrapper[5007]: E0218 19:40:20.911375 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:40:22 crc kubenswrapper[5007]: I0218 19:40:22.909006 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:40:22 crc kubenswrapper[5007]: I0218 19:40:22.909124 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:40:22 crc kubenswrapper[5007]: I0218 19:40:22.909124 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:40:22 crc kubenswrapper[5007]: I0218 19:40:22.909206 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:22 crc kubenswrapper[5007]: E0218 19:40:22.909347 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:40:22 crc kubenswrapper[5007]: E0218 19:40:22.909524 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:40:22 crc kubenswrapper[5007]: E0218 19:40:22.909634 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:40:22 crc kubenswrapper[5007]: E0218 19:40:22.909727 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b"
Feb 18 19:40:24 crc kubenswrapper[5007]: I0218 19:40:24.908711 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:24 crc kubenswrapper[5007]: E0218 19:40:24.909182 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b"
Feb 18 19:40:24 crc kubenswrapper[5007]: I0218 19:40:24.909279 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:40:24 crc kubenswrapper[5007]: I0218 19:40:24.909575 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:40:24 crc kubenswrapper[5007]: I0218 19:40:24.909263 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:40:24 crc kubenswrapper[5007]: E0218 19:40:24.910121 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:40:24 crc kubenswrapper[5007]: E0218 19:40:24.910296 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:40:24 crc kubenswrapper[5007]: E0218 19:40:24.910529 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:40:24 crc kubenswrapper[5007]: I0218 19:40:24.910700 5007 scope.go:117] "RemoveContainer" containerID="ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec"
Feb 18 19:40:24 crc kubenswrapper[5007]: E0218 19:40:24.911002 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-75flr_openshift-ovn-kubernetes(de3b9770-e878-4070-978e-29931dd72c35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35"
Feb 18 19:40:26 crc kubenswrapper[5007]: I0218 19:40:26.909176 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:40:26 crc kubenswrapper[5007]: I0218 19:40:26.909255 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl"
Feb 18 19:40:26 crc kubenswrapper[5007]: I0218 19:40:26.909199 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:40:26 crc kubenswrapper[5007]: E0218 19:40:26.909413 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:40:26 crc kubenswrapper[5007]: E0218 19:40:26.909572 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b"
Feb 18 19:40:26 crc kubenswrapper[5007]: I0218 19:40:26.909640 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:40:26 crc kubenswrapper[5007]: E0218 19:40:26.909716 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:26 crc kubenswrapper[5007]: E0218 19:40:26.909784 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:28 crc kubenswrapper[5007]: I0218 19:40:28.909421 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:28 crc kubenswrapper[5007]: I0218 19:40:28.909581 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:28 crc kubenswrapper[5007]: I0218 19:40:28.909479 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:28 crc kubenswrapper[5007]: I0218 19:40:28.909630 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:28 crc kubenswrapper[5007]: E0218 19:40:28.909694 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:28 crc kubenswrapper[5007]: E0218 19:40:28.909896 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:28 crc kubenswrapper[5007]: E0218 19:40:28.910068 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:28 crc kubenswrapper[5007]: E0218 19:40:28.910175 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:30 crc kubenswrapper[5007]: I0218 19:40:30.908620 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:30 crc kubenswrapper[5007]: I0218 19:40:30.908669 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:30 crc kubenswrapper[5007]: E0218 19:40:30.908770 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:30 crc kubenswrapper[5007]: I0218 19:40:30.908620 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:30 crc kubenswrapper[5007]: I0218 19:40:30.908836 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:30 crc kubenswrapper[5007]: E0218 19:40:30.908860 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:30 crc kubenswrapper[5007]: E0218 19:40:30.908999 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:30 crc kubenswrapper[5007]: E0218 19:40:30.908916 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:32 crc kubenswrapper[5007]: I0218 19:40:32.909309 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:32 crc kubenswrapper[5007]: I0218 19:40:32.909346 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:32 crc kubenswrapper[5007]: I0218 19:40:32.909491 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:32 crc kubenswrapper[5007]: I0218 19:40:32.909576 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:32 crc kubenswrapper[5007]: E0218 19:40:32.909672 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:32 crc kubenswrapper[5007]: E0218 19:40:32.909769 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:32 crc kubenswrapper[5007]: E0218 19:40:32.909879 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:32 crc kubenswrapper[5007]: E0218 19:40:32.910046 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:33 crc kubenswrapper[5007]: I0218 19:40:33.332321 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kjbbj_9c2fc812-cdd8-415b-a138-b8a3e14fd396/kube-multus/1.log" Feb 18 19:40:33 crc kubenswrapper[5007]: I0218 19:40:33.332716 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kjbbj_9c2fc812-cdd8-415b-a138-b8a3e14fd396/kube-multus/0.log" Feb 18 19:40:33 crc kubenswrapper[5007]: I0218 19:40:33.332744 5007 generic.go:334] "Generic (PLEG): container finished" podID="9c2fc812-cdd8-415b-a138-b8a3e14fd396" containerID="b55b8cd27548291a3d8d544065d6d311b2c0c87267c7dcec461b243a28ea14d9" exitCode=1 Feb 18 19:40:33 crc kubenswrapper[5007]: I0218 19:40:33.332765 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kjbbj" event={"ID":"9c2fc812-cdd8-415b-a138-b8a3e14fd396","Type":"ContainerDied","Data":"b55b8cd27548291a3d8d544065d6d311b2c0c87267c7dcec461b243a28ea14d9"} Feb 18 19:40:33 crc kubenswrapper[5007]: I0218 19:40:33.332794 5007 scope.go:117] "RemoveContainer" containerID="0d8dfd0f28c8632b1d40d27d854a63176c278281bc3b0dad005a1657e406fba6" Feb 18 19:40:33 crc kubenswrapper[5007]: I0218 19:40:33.333134 5007 scope.go:117] "RemoveContainer" containerID="b55b8cd27548291a3d8d544065d6d311b2c0c87267c7dcec461b243a28ea14d9" Feb 18 19:40:33 crc kubenswrapper[5007]: E0218 19:40:33.333300 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-kjbbj_openshift-multus(9c2fc812-cdd8-415b-a138-b8a3e14fd396)\"" pod="openshift-multus/multus-kjbbj" podUID="9c2fc812-cdd8-415b-a138-b8a3e14fd396" Feb 18 19:40:33 crc kubenswrapper[5007]: I0218 19:40:33.351246 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-etcd/etcd-crc" podStartSLOduration=20.351226753 podStartE2EDuration="20.351226753s" podCreationTimestamp="2026-02-18 19:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:15.949544273 +0000 UTC m=+100.731663807" watchObservedRunningTime="2026-02-18 19:40:33.351226753 +0000 UTC m=+118.133346277" Feb 18 19:40:34 crc kubenswrapper[5007]: I0218 19:40:34.339271 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kjbbj_9c2fc812-cdd8-415b-a138-b8a3e14fd396/kube-multus/1.log" Feb 18 19:40:34 crc kubenswrapper[5007]: I0218 19:40:34.909480 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:34 crc kubenswrapper[5007]: I0218 19:40:34.909588 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:34 crc kubenswrapper[5007]: I0218 19:40:34.909700 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:34 crc kubenswrapper[5007]: E0218 19:40:34.909693 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:34 crc kubenswrapper[5007]: I0218 19:40:34.909750 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:34 crc kubenswrapper[5007]: E0218 19:40:34.909875 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:34 crc kubenswrapper[5007]: E0218 19:40:34.910364 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:34 crc kubenswrapper[5007]: E0218 19:40:34.910620 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:35 crc kubenswrapper[5007]: E0218 19:40:35.905044 5007 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 19:40:36 crc kubenswrapper[5007]: E0218 19:40:36.165304 5007 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 19:40:36 crc kubenswrapper[5007]: I0218 19:40:36.909153 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:36 crc kubenswrapper[5007]: I0218 19:40:36.909167 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:36 crc kubenswrapper[5007]: E0218 19:40:36.909971 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:36 crc kubenswrapper[5007]: I0218 19:40:36.909199 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:36 crc kubenswrapper[5007]: I0218 19:40:36.909178 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:36 crc kubenswrapper[5007]: E0218 19:40:36.910087 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:36 crc kubenswrapper[5007]: E0218 19:40:36.910284 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:36 crc kubenswrapper[5007]: E0218 19:40:36.910671 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:38 crc kubenswrapper[5007]: I0218 19:40:38.909566 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:38 crc kubenswrapper[5007]: E0218 19:40:38.909797 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:38 crc kubenswrapper[5007]: I0218 19:40:38.909919 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:38 crc kubenswrapper[5007]: I0218 19:40:38.909913 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:38 crc kubenswrapper[5007]: E0218 19:40:38.910139 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:38 crc kubenswrapper[5007]: E0218 19:40:38.910233 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:38 crc kubenswrapper[5007]: I0218 19:40:38.910665 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:38 crc kubenswrapper[5007]: E0218 19:40:38.910785 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:38 crc kubenswrapper[5007]: I0218 19:40:38.912806 5007 scope.go:117] "RemoveContainer" containerID="ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec" Feb 18 19:40:39 crc kubenswrapper[5007]: I0218 19:40:39.364757 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/3.log" Feb 18 19:40:39 crc kubenswrapper[5007]: I0218 19:40:39.368041 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerStarted","Data":"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed"} Feb 18 19:40:39 crc kubenswrapper[5007]: I0218 19:40:39.368721 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:40:39 crc kubenswrapper[5007]: I0218 19:40:39.406326 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podStartSLOduration=102.406276008 podStartE2EDuration="1m42.406276008s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:39.405072384 +0000 UTC m=+124.187191958" watchObservedRunningTime="2026-02-18 19:40:39.406276008 +0000 UTC m=+124.188395552" Feb 18 19:40:39 crc kubenswrapper[5007]: I0218 19:40:39.928765 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-96xnl"] Feb 18 19:40:39 crc kubenswrapper[5007]: I0218 19:40:39.929000 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:39 crc kubenswrapper[5007]: E0218 19:40:39.929226 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:40 crc kubenswrapper[5007]: I0218 19:40:40.908980 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:40 crc kubenswrapper[5007]: I0218 19:40:40.908991 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:40 crc kubenswrapper[5007]: I0218 19:40:40.909020 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:40 crc kubenswrapper[5007]: E0218 19:40:40.909527 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:40 crc kubenswrapper[5007]: E0218 19:40:40.909410 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:40 crc kubenswrapper[5007]: E0218 19:40:40.909937 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:41 crc kubenswrapper[5007]: E0218 19:40:41.166989 5007 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 19:40:41 crc kubenswrapper[5007]: I0218 19:40:41.908784 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:41 crc kubenswrapper[5007]: E0218 19:40:41.908982 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:42 crc kubenswrapper[5007]: I0218 19:40:42.909702 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:42 crc kubenswrapper[5007]: E0218 19:40:42.909829 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:42 crc kubenswrapper[5007]: I0218 19:40:42.909815 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:42 crc kubenswrapper[5007]: E0218 19:40:42.909913 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:42 crc kubenswrapper[5007]: I0218 19:40:42.909701 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:42 crc kubenswrapper[5007]: E0218 19:40:42.909978 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:43 crc kubenswrapper[5007]: I0218 19:40:43.909266 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:43 crc kubenswrapper[5007]: E0218 19:40:43.909502 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:44 crc kubenswrapper[5007]: I0218 19:40:44.910015 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:44 crc kubenswrapper[5007]: E0218 19:40:44.910264 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:44 crc kubenswrapper[5007]: I0218 19:40:44.910630 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:44 crc kubenswrapper[5007]: E0218 19:40:44.910777 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:44 crc kubenswrapper[5007]: I0218 19:40:44.910995 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:44 crc kubenswrapper[5007]: E0218 19:40:44.911089 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:44 crc kubenswrapper[5007]: I0218 19:40:44.917098 5007 scope.go:117] "RemoveContainer" containerID="b55b8cd27548291a3d8d544065d6d311b2c0c87267c7dcec461b243a28ea14d9" Feb 18 19:40:45 crc kubenswrapper[5007]: I0218 19:40:45.397069 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kjbbj_9c2fc812-cdd8-415b-a138-b8a3e14fd396/kube-multus/1.log" Feb 18 19:40:45 crc kubenswrapper[5007]: I0218 19:40:45.397156 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kjbbj" event={"ID":"9c2fc812-cdd8-415b-a138-b8a3e14fd396","Type":"ContainerStarted","Data":"e7957d56fb4dadc65bfa2816698c583bf7f0be11331ec1ec2793b1354173aec0"} Feb 18 19:40:45 crc kubenswrapper[5007]: I0218 19:40:45.909521 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:45 crc kubenswrapper[5007]: E0218 19:40:45.910943 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:46 crc kubenswrapper[5007]: E0218 19:40:46.169234 5007 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 19:40:46 crc kubenswrapper[5007]: I0218 19:40:46.909134 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:46 crc kubenswrapper[5007]: E0218 19:40:46.909313 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:46 crc kubenswrapper[5007]: I0218 19:40:46.909640 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:46 crc kubenswrapper[5007]: I0218 19:40:46.909760 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:46 crc kubenswrapper[5007]: E0218 19:40:46.909861 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:46 crc kubenswrapper[5007]: E0218 19:40:46.910020 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:47 crc kubenswrapper[5007]: I0218 19:40:47.130918 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:40:47 crc kubenswrapper[5007]: I0218 19:40:47.909968 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:47 crc kubenswrapper[5007]: E0218 19:40:47.910235 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:48 crc kubenswrapper[5007]: I0218 19:40:48.909541 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:48 crc kubenswrapper[5007]: I0218 19:40:48.909541 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:48 crc kubenswrapper[5007]: E0218 19:40:48.909823 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:48 crc kubenswrapper[5007]: E0218 19:40:48.909889 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:48 crc kubenswrapper[5007]: I0218 19:40:48.909569 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:48 crc kubenswrapper[5007]: E0218 19:40:48.910056 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:49 crc kubenswrapper[5007]: I0218 19:40:49.909170 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:49 crc kubenswrapper[5007]: E0218 19:40:49.911228 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-96xnl" podUID="7f72bbfc-96e8-4151-9291-bb3da55e7d2b" Feb 18 19:40:50 crc kubenswrapper[5007]: I0218 19:40:50.909135 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:50 crc kubenswrapper[5007]: E0218 19:40:50.909293 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:40:50 crc kubenswrapper[5007]: I0218 19:40:50.909357 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:50 crc kubenswrapper[5007]: I0218 19:40:50.909375 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:50 crc kubenswrapper[5007]: E0218 19:40:50.909401 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:40:50 crc kubenswrapper[5007]: E0218 19:40:50.909681 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:40:51 crc kubenswrapper[5007]: I0218 19:40:51.908874 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:40:51 crc kubenswrapper[5007]: I0218 19:40:51.912102 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 19:40:51 crc kubenswrapper[5007]: I0218 19:40:51.912413 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 19:40:52 crc kubenswrapper[5007]: I0218 19:40:52.909272 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:40:52 crc kubenswrapper[5007]: I0218 19:40:52.909285 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:40:52 crc kubenswrapper[5007]: I0218 19:40:52.909421 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:40:52 crc kubenswrapper[5007]: I0218 19:40:52.912232 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 19:40:52 crc kubenswrapper[5007]: I0218 19:40:52.912347 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 19:40:52 crc kubenswrapper[5007]: I0218 19:40:52.912255 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 19:40:52 crc kubenswrapper[5007]: I0218 19:40:52.912848 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.775920 5007 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 18 19:40:54 crc 
kubenswrapper[5007]: I0218 19:40:54.833131 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d8pd9"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.837929 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.843459 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.843843 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gdbm9"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.847925 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.848569 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.849029 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.849308 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.849351 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.849643 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.849874 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj"] Feb 18 19:40:54 
crc kubenswrapper[5007]: I0218 19:40:54.849052 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.850180 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.852272 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.863635 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.865523 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.865705 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.866979 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.867708 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.868039 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.871014 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-j5hn8"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.871253 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.871321 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.873501 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.873614 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.873713 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.873805 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.874207 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.876035 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.876145 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.876277 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8r6kd"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.876572 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.876882 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.877735 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4vmwt"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.878181 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.880311 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-msmb4"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.880927 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b2xj7"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.881302 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.881717 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-msmb4" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.882715 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.882770 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.882800 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.883320 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lb5x9"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.883595 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.883804 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.883865 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.884511 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-5kxmr"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.884694 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.884865 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.885064 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.885260 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-cjq4g"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.885697 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cjq4g" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.886278 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.886734 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.887040 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.887245 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.887451 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.888285 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m68bj"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.888558 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.888685 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.888720 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.888852 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.888961 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.889693 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.889909 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.890027 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.890249 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.890262 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.890545 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.890600 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.890677 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.890774 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.891012 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.891072 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.892231 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.892857 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.893253 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.893487 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-764l7"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.893918 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.901760 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.908454 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.909053 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.910145 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.910616 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.910854 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.911338 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.912100 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tw86n"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.912604 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.913730 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.913996 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.914144 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.914621 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.914928 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.915252 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.916990 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.917022 5007 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.917650 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.917812 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.918085 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.925181 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.925330 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.925728 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.926173 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.930627 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-config\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.930674 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.930709 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.930731 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-etcd-serving-ca\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.941652 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.941917 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.942646 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.943038 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.945186 5007 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.948563 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.949169 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.949261 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.949379 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.949508 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.949931 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.950168 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.950252 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.930750 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cee7777d-8cb6-4c90-a650-4da91faaf021-etcd-client\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 
19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.951567 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.951638 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cee7777d-8cb6-4c90-a650-4da91faaf021-serving-cert\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.951670 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-image-import-ca\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.951719 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v864z\" (UniqueName: \"kubernetes.io/projected/cee7777d-8cb6-4c90-a650-4da91faaf021-kube-api-access-v864z\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.951751 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-config\") pod \"route-controller-manager-6576b87f9c-jlmb7\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.951809 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cee7777d-8cb6-4c90-a650-4da91faaf021-node-pullsecrets\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.951844 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-config\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.951874 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-client-ca\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.951927 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cee7777d-8cb6-4c90-a650-4da91faaf021-encryption-config\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.951967 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cee7777d-8cb6-4c90-a650-4da91faaf021-audit-dir\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" 
Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.952005 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-audit\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.952047 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-serving-cert\") pod \"route-controller-manager-6576b87f9c-jlmb7\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.952107 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9l4n\" (UniqueName: \"kubernetes.io/projected/02fc3cf7-03bd-4940-956b-51c16a41c310-kube-api-access-m9l4n\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.952167 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8qb2\" (UniqueName: \"kubernetes.io/projected/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-kube-api-access-c8qb2\") pod \"route-controller-manager-6576b87f9c-jlmb7\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.952207 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/02fc3cf7-03bd-4940-956b-51c16a41c310-serving-cert\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.952240 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-client-ca\") pod \"route-controller-manager-6576b87f9c-jlmb7\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.952758 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.953388 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.953653 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.954471 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.954939 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.956093 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.956230 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.956261 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.956474 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.956590 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.956633 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.956700 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.956771 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 19:40:54 crc 
kubenswrapper[5007]: I0218 19:40:54.956898 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.956957 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.957138 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.957201 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.957145 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.957381 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.957538 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.958090 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.958281 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.958488 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.958685 5007 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.958876 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.959029 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.959123 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.959193 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.959267 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.959340 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.962377 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.963154 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.963622 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.964562 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7f9v5"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.965129 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7f9v5" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.965347 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.966876 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.967201 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.967418 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.968253 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.968697 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kfmgt"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.969606 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kfmgt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.970894 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.978016 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tfxfg"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.979295 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.980964 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.983456 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.991445 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.991725 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.992778 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.993530 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.993688 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.994361 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.995977 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.997070 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn"] Feb 18 19:40:54 crc kubenswrapper[5007]: I0218 19:40:54.997933 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.022953 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.025743 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.028613 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-82445"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.029030 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.029054 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xms9g"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.029343 5007 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.029733 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.029956 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.030097 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-82445" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.030254 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.030941 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.031178 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gdbm9"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.032296 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.034717 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.038310 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.038460 5007 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-canary/ingress-canary-z6vfc"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.039242 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z6vfc" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.042663 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5kxmr"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.043764 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.044729 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.048447 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062470 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v864z\" (UniqueName: \"kubernetes.io/projected/cee7777d-8cb6-4c90-a650-4da91faaf021-kube-api-access-v864z\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062509 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9b42\" (UniqueName: \"kubernetes.io/projected/78ae9445-87b0-4e87-a46c-d2a0146d2051-kube-api-access-h9b42\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062537 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251e8541-289e-4085-9d90-5621ea85ca04-config\") pod \"machine-api-operator-5694c8668f-j5hn8\" (UID: \"251e8541-289e-4085-9d90-5621ea85ca04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062555 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9e559f8-c226-49c4-9354-68241ac7fa62-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pbmcj\" (UID: \"d9e559f8-c226-49c4-9354-68241ac7fa62\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062571 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8937e868-388c-45cd-9260-972f5fd599e7-serving-cert\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062587 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba240294-0bf3-46a0-8fa5-7d2029e3fd56-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cklm5\" (UID: \"ba240294-0bf3-46a0-8fa5-7d2029e3fd56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062603 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-encryption-config\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: 
\"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062623 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-config\") pod \"route-controller-manager-6576b87f9c-jlmb7\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062620 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b2xj7"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062643 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6md2\" (UniqueName: \"kubernetes.io/projected/251e8541-289e-4085-9d90-5621ea85ca04-kube-api-access-j6md2\") pod \"machine-api-operator-5694c8668f-j5hn8\" (UID: \"251e8541-289e-4085-9d90-5621ea85ca04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062665 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-audit-policies\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062685 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/394f7cfe-7b2d-40d0-a780-fbbe5720101b-auth-proxy-config\") pod \"machine-approver-56656f9798-tfx5l\" (UID: \"394f7cfe-7b2d-40d0-a780-fbbe5720101b\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062700 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cee7777d-8cb6-4c90-a650-4da91faaf021-node-pullsecrets\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062730 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-config\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062747 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062798 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cee7777d-8cb6-4c90-a650-4da91faaf021-node-pullsecrets\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062856 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwkf2\" (UniqueName: 
\"kubernetes.io/projected/9105100f-dce8-42a8-b182-945de577952d-kube-api-access-gwkf2\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062896 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/394f7cfe-7b2d-40d0-a780-fbbe5720101b-machine-approver-tls\") pod \"machine-approver-56656f9798-tfx5l\" (UID: \"394f7cfe-7b2d-40d0-a780-fbbe5720101b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062964 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f99085de-37df-4176-94af-d62782e53e73-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4x7tb\" (UID: \"f99085de-37df-4176-94af-d62782e53e73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.062989 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m45ms\" (UniqueName: \"kubernetes.io/projected/b4904a98-07ab-4aab-b7e5-fbcab02bf70d-kube-api-access-m45ms\") pod \"kube-storage-version-migrator-operator-b67b599dd-4pd8w\" (UID: \"b4904a98-07ab-4aab-b7e5-fbcab02bf70d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063034 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/78ae9445-87b0-4e87-a46c-d2a0146d2051-default-certificate\") pod 
\"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063059 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d2ded945-4867-4a9b-bc0d-0643f04decbe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9x6zp\" (UID: \"d2ded945-4867-4a9b-bc0d-0643f04decbe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063103 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4904a98-07ab-4aab-b7e5-fbcab02bf70d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4pd8w\" (UID: \"b4904a98-07ab-4aab-b7e5-fbcab02bf70d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063132 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-client-ca\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063177 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-audit-policies\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 
19:40:55.063202 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba240294-0bf3-46a0-8fa5-7d2029e3fd56-config\") pod \"kube-apiserver-operator-766d6c64bb-cklm5\" (UID: \"ba240294-0bf3-46a0-8fa5-7d2029e3fd56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063230 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063276 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x96sp\" (UniqueName: \"kubernetes.io/projected/3228209f-261f-431c-8552-a83291d93d52-kube-api-access-x96sp\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063298 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78ae9445-87b0-4e87-a46c-d2a0146d2051-service-ca-bundle\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063343 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tndqf\" (UniqueName: 
\"kubernetes.io/projected/d9e559f8-c226-49c4-9354-68241ac7fa62-kube-api-access-tndqf\") pod \"machine-config-operator-74547568cd-pbmcj\" (UID: \"d9e559f8-c226-49c4-9354-68241ac7fa62\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063373 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dbbd3e2-4581-4ca2-9180-3dd62529478e-serving-cert\") pod \"openshift-config-operator-7777fb866f-m68bj\" (UID: \"5dbbd3e2-4581-4ca2-9180-3dd62529478e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063417 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cee7777d-8cb6-4c90-a650-4da91faaf021-encryption-config\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063476 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cee7777d-8cb6-4c90-a650-4da91faaf021-audit-dir\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063599 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cee7777d-8cb6-4c90-a650-4da91faaf021-audit-dir\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063616 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2745246a-5a9d-4f6f-95b1-d4d473571db8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s792g\" (UID: \"2745246a-5a9d-4f6f-95b1-d4d473571db8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063661 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063691 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d7f3068-4f27-4e7c-b458-0fa999acbe60-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5hb2m\" (UID: \"5d7f3068-4f27-4e7c-b458-0fa999acbe60\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063730 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-config\") pod \"route-controller-manager-6576b87f9c-jlmb7\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063718 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg8mn\" (UniqueName: 
\"kubernetes.io/projected/394f7cfe-7b2d-40d0-a780-fbbe5720101b-kube-api-access-fg8mn\") pod \"machine-approver-56656f9798-tfx5l\" (UID: \"394f7cfe-7b2d-40d0-a780-fbbe5720101b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063782 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d9e559f8-c226-49c4-9354-68241ac7fa62-images\") pod \"machine-config-operator-74547568cd-pbmcj\" (UID: \"d9e559f8-c226-49c4-9354-68241ac7fa62\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063805 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wtls\" (UniqueName: \"kubernetes.io/projected/d23074ca-8045-4137-aa07-78b59df3dbc9-kube-api-access-8wtls\") pod \"cluster-image-registry-operator-dc59b4c8b-mwkdf\" (UID: \"d23074ca-8045-4137-aa07-78b59df3dbc9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063831 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztdg7\" (UniqueName: \"kubernetes.io/projected/b6d0e889-a325-491c-bfda-b5d2090f4f97-kube-api-access-ztdg7\") pod \"dns-operator-744455d44c-msmb4\" (UID: \"b6d0e889-a325-491c-bfda-b5d2090f4f97\") " pod="openshift-dns-operator/dns-operator-744455d44c-msmb4" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063874 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-audit\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 
crc kubenswrapper[5007]: I0218 19:40:55.063899 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9105100f-dce8-42a8-b182-945de577952d-audit-dir\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063948 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/78ae9445-87b0-4e87-a46c-d2a0146d2051-stats-auth\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063971 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d7f3068-4f27-4e7c-b458-0fa999acbe60-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5hb2m\" (UID: \"5d7f3068-4f27-4e7c-b458-0fa999acbe60\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.063995 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/251e8541-289e-4085-9d90-5621ea85ca04-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-j5hn8\" (UID: \"251e8541-289e-4085-9d90-5621ea85ca04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064031 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qctn5\" (UniqueName: 
\"kubernetes.io/projected/0c0f693c-c46e-4c30-af8b-f962c3362e2a-kube-api-access-qctn5\") pod \"cluster-samples-operator-665b6dd947-r28hj\" (UID: \"0c0f693c-c46e-4c30-af8b-f962c3362e2a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064115 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064118 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-serving-cert\") pod \"route-controller-manager-6576b87f9c-jlmb7\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064167 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-config\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064179 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3228209f-261f-431c-8552-a83291d93d52-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064205 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d23074ca-8045-4137-aa07-78b59df3dbc9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mwkdf\" (UID: \"d23074ca-8045-4137-aa07-78b59df3dbc9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064232 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7r5g\" (UniqueName: \"kubernetes.io/projected/5dbbd3e2-4581-4ca2-9180-3dd62529478e-kube-api-access-f7r5g\") pod \"openshift-config-operator-7777fb866f-m68bj\" (UID: \"5dbbd3e2-4581-4ca2-9180-3dd62529478e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064257 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064414 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-client-ca\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064636 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0c0f693c-c46e-4c30-af8b-f962c3362e2a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r28hj\" (UID: \"0c0f693c-c46e-4c30-af8b-f962c3362e2a\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064676 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78ae9445-87b0-4e87-a46c-d2a0146d2051-metrics-certs\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064691 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-audit\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064700 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-audit-dir\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064731 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98tmd\" (UniqueName: \"kubernetes.io/projected/a35b162c-6740-4f1d-bb18-3b8abbc84877-kube-api-access-98tmd\") pod \"migrator-59844c95c7-7f9v5\" (UID: \"a35b162c-6740-4f1d-bb18-3b8abbc84877\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7f9v5" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064757 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9l4n\" (UniqueName: \"kubernetes.io/projected/02fc3cf7-03bd-4940-956b-51c16a41c310-kube-api-access-m9l4n\") pod 
\"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064777 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064821 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064840 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064858 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 
crc kubenswrapper[5007]: I0218 19:40:55.064878 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.064997 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3228209f-261f-431c-8552-a83291d93d52-service-ca-bundle\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.065058 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4904a98-07ab-4aab-b7e5-fbcab02bf70d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4pd8w\" (UID: \"b4904a98-07ab-4aab-b7e5-fbcab02bf70d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.065129 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8qb2\" (UniqueName: \"kubernetes.io/projected/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-kube-api-access-c8qb2\") pod \"route-controller-manager-6576b87f9c-jlmb7\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.065180 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9e559f8-c226-49c4-9354-68241ac7fa62-proxy-tls\") pod \"machine-config-operator-74547568cd-pbmcj\" (UID: \"d9e559f8-c226-49c4-9354-68241ac7fa62\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.065205 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8s75\" (UniqueName: \"kubernetes.io/projected/9d1c820f-f578-4833-a1b4-62e9cb304a2b-kube-api-access-f8s75\") pod \"ingress-operator-5b745b69d9-lc6l2\" (UID: \"9d1c820f-f578-4833-a1b4-62e9cb304a2b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.065238 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d23074ca-8045-4137-aa07-78b59df3dbc9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mwkdf\" (UID: \"d23074ca-8045-4137-aa07-78b59df3dbc9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.065267 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02fc3cf7-03bd-4940-956b-51c16a41c310-serving-cert\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.065313 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-client-ca\") pod \"route-controller-manager-6576b87f9c-jlmb7\" (UID: 
\"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.065340 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f99085de-37df-4176-94af-d62782e53e73-config\") pod \"kube-controller-manager-operator-78b949d7b-4x7tb\" (UID: \"f99085de-37df-4176-94af-d62782e53e73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.066553 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-client-ca\") pod \"route-controller-manager-6576b87f9c-jlmb7\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.066639 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d2ded945-4867-4a9b-bc0d-0643f04decbe-proxy-tls\") pod \"machine-config-controller-84d6567774-9x6zp\" (UID: \"d2ded945-4867-4a9b-bc0d-0643f04decbe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.066686 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.066719 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm5wr\" (UniqueName: \"kubernetes.io/projected/8937e868-388c-45cd-9260-972f5fd599e7-kube-api-access-jm5wr\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.066774 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.066821 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-config\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.066866 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.066890 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8937e868-388c-45cd-9260-972f5fd599e7-config\") pod \"etcd-operator-b45778765-lb5x9\" 
(UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.066924 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d23074ca-8045-4137-aa07-78b59df3dbc9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mwkdf\" (UID: \"d23074ca-8045-4137-aa07-78b59df3dbc9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.066946 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhdnp\" (UniqueName: \"kubernetes.io/projected/d2ded945-4867-4a9b-bc0d-0643f04decbe-kube-api-access-lhdnp\") pod \"machine-config-controller-84d6567774-9x6zp\" (UID: \"d2ded945-4867-4a9b-bc0d-0643f04decbe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.066981 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3228209f-261f-431c-8552-a83291d93d52-serving-cert\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.067018 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgwfq\" (UniqueName: \"kubernetes.io/projected/3da7b766-72c7-4f00-9787-4fc5d86ef9c0-kube-api-access-jgwfq\") pod \"downloads-7954f5f757-cjq4g\" (UID: \"3da7b766-72c7-4f00-9787-4fc5d86ef9c0\") " pod="openshift-console/downloads-7954f5f757-cjq4g" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.067039 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d1c820f-f578-4833-a1b4-62e9cb304a2b-trusted-ca\") pod \"ingress-operator-5b745b69d9-lc6l2\" (UID: \"9d1c820f-f578-4833-a1b4-62e9cb304a2b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.067062 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.067529 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-config\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.071192 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.071265 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8937e868-388c-45cd-9260-972f5fd599e7-etcd-client\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" Feb 18 19:40:55 crc 
kubenswrapper[5007]: I0218 19:40:55.071287 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvwrq\" (UniqueName: \"kubernetes.io/projected/5d7f3068-4f27-4e7c-b458-0fa999acbe60-kube-api-access-qvwrq\") pod \"openshift-apiserver-operator-796bbdcf4f-5hb2m\" (UID: \"5d7f3068-4f27-4e7c-b458-0fa999acbe60\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.071317 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3228209f-261f-431c-8552-a83291d93d52-config\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.071337 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8937e868-388c-45cd-9260-972f5fd599e7-etcd-ca\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.071355 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f99085de-37df-4176-94af-d62782e53e73-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4x7tb\" (UID: \"f99085de-37df-4176-94af-d62782e53e73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.071518 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9d1c820f-f578-4833-a1b4-62e9cb304a2b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lc6l2\" (UID: \"9d1c820f-f578-4833-a1b4-62e9cb304a2b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.071555 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba240294-0bf3-46a0-8fa5-7d2029e3fd56-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cklm5\" (UID: \"ba240294-0bf3-46a0-8fa5-7d2029e3fd56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.071572 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/251e8541-289e-4085-9d90-5621ea85ca04-images\") pod \"machine-api-operator-5694c8668f-j5hn8\" (UID: \"251e8541-289e-4085-9d90-5621ea85ca04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.071618 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-etcd-serving-ca\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.071667 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cee7777d-8cb6-4c90-a650-4da91faaf021-etcd-client\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.072163 5007 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-etcd-serving-ca\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.074409 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.075050 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8937e868-388c-45cd-9260-972f5fd599e7-etcd-service-ca\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.075091 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgfgn\" (UniqueName: \"kubernetes.io/projected/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-kube-api-access-qgfgn\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.075180 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgh55\" (UniqueName: \"kubernetes.io/projected/2745246a-5a9d-4f6f-95b1-d4d473571db8-kube-api-access-jgh55\") pod \"control-plane-machine-set-operator-78cbb6b69f-s792g\" (UID: \"2745246a-5a9d-4f6f-95b1-d4d473571db8\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.075221 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cee7777d-8cb6-4c90-a650-4da91faaf021-serving-cert\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.075254 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d1c820f-f578-4833-a1b4-62e9cb304a2b-metrics-tls\") pod \"ingress-operator-5b745b69d9-lc6l2\" (UID: \"9d1c820f-f578-4833-a1b4-62e9cb304a2b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.075299 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394f7cfe-7b2d-40d0-a780-fbbe5720101b-config\") pod \"machine-approver-56656f9798-tfx5l\" (UID: \"394f7cfe-7b2d-40d0-a780-fbbe5720101b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.075318 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-etcd-client\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.077280 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-jlmb7\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.077325 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.077399 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-serving-cert\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.077451 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6d0e889-a325-491c-bfda-b5d2090f4f97-metrics-tls\") pod \"dns-operator-744455d44c-msmb4\" (UID: \"b6d0e889-a325-491c-bfda-b5d2090f4f97\") " pod="openshift-dns-operator/dns-operator-744455d44c-msmb4" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.077470 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.077497 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5dbbd3e2-4581-4ca2-9180-3dd62529478e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m68bj\" 
(UID: \"5dbbd3e2-4581-4ca2-9180-3dd62529478e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.077549 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-image-import-ca\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.079310 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-image-import-ca\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.081390 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cee7777d-8cb6-4c90-a650-4da91faaf021-serving-cert\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.081495 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.081981 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cee7777d-8cb6-4c90-a650-4da91faaf021-encryption-config\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.082056 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-msmb4"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.083684 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cee7777d-8cb6-4c90-a650-4da91faaf021-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.083733 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.085452 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.088917 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cee7777d-8cb6-4c90-a650-4da91faaf021-etcd-client\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.095511 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kfmgt"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.095589 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4vmwt"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.100717 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02fc3cf7-03bd-4940-956b-51c16a41c310-serving-cert\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.103542 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.107473 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tw86n"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.107539 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-j5hn8"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.109286 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.110740 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m68bj"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.113498 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.113602 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8rhk6"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.114998 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8rhk6" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.115291 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lb5x9"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.116791 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7f9v5"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.118166 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d8pd9"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.120347 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.122243 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.122931 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.124268 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tfxfg"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.125416 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cjq4g"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.126995 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-82445"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.128697 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8r6kd"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.129924 5007 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.131139 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.132465 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.135964 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xms9g"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.137663 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.139285 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8rhk6"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.140436 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.140808 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.142302 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.147025 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-clz74"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.149006 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.155093 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-pg4xw"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.158213 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pg4xw" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.158542 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z6vfc"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.159503 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-clz74"] Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.160589 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178688 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8937e868-388c-45cd-9260-972f5fd599e7-etcd-service-ca\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178719 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgfgn\" (UniqueName: \"kubernetes.io/projected/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-kube-api-access-qgfgn\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178748 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jgh55\" (UniqueName: \"kubernetes.io/projected/2745246a-5a9d-4f6f-95b1-d4d473571db8-kube-api-access-jgh55\") pod \"control-plane-machine-set-operator-78cbb6b69f-s792g\" (UID: \"2745246a-5a9d-4f6f-95b1-d4d473571db8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178767 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-etcd-client\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178785 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d1c820f-f578-4833-a1b4-62e9cb304a2b-metrics-tls\") pod \"ingress-operator-5b745b69d9-lc6l2\" (UID: \"9d1c820f-f578-4833-a1b4-62e9cb304a2b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178809 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394f7cfe-7b2d-40d0-a780-fbbe5720101b-config\") pod \"machine-approver-56656f9798-tfx5l\" (UID: \"394f7cfe-7b2d-40d0-a780-fbbe5720101b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178824 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-serving-cert\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178841 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6d0e889-a325-491c-bfda-b5d2090f4f97-metrics-tls\") pod \"dns-operator-744455d44c-msmb4\" (UID: \"b6d0e889-a325-491c-bfda-b5d2090f4f97\") " pod="openshift-dns-operator/dns-operator-744455d44c-msmb4" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178861 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178879 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5dbbd3e2-4581-4ca2-9180-3dd62529478e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m68bj\" (UID: \"5dbbd3e2-4581-4ca2-9180-3dd62529478e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178895 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9b42\" (UniqueName: \"kubernetes.io/projected/78ae9445-87b0-4e87-a46c-d2a0146d2051-kube-api-access-h9b42\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178919 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9e559f8-c226-49c4-9354-68241ac7fa62-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pbmcj\" (UID: \"d9e559f8-c226-49c4-9354-68241ac7fa62\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178934 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8937e868-388c-45cd-9260-972f5fd599e7-serving-cert\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178950 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba240294-0bf3-46a0-8fa5-7d2029e3fd56-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cklm5\" (UID: \"ba240294-0bf3-46a0-8fa5-7d2029e3fd56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178964 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-encryption-config\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178979 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251e8541-289e-4085-9d90-5621ea85ca04-config\") pod \"machine-api-operator-5694c8668f-j5hn8\" (UID: \"251e8541-289e-4085-9d90-5621ea85ca04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.178997 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6md2\" (UniqueName: \"kubernetes.io/projected/251e8541-289e-4085-9d90-5621ea85ca04-kube-api-access-j6md2\") pod 
\"machine-api-operator-5694c8668f-j5hn8\" (UID: \"251e8541-289e-4085-9d90-5621ea85ca04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179016 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/394f7cfe-7b2d-40d0-a780-fbbe5720101b-auth-proxy-config\") pod \"machine-approver-56656f9798-tfx5l\" (UID: \"394f7cfe-7b2d-40d0-a780-fbbe5720101b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179034 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-audit-policies\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179051 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwkf2\" (UniqueName: \"kubernetes.io/projected/9105100f-dce8-42a8-b182-945de577952d-kube-api-access-gwkf2\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179076 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179092 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/394f7cfe-7b2d-40d0-a780-fbbe5720101b-machine-approver-tls\") pod \"machine-approver-56656f9798-tfx5l\" (UID: \"394f7cfe-7b2d-40d0-a780-fbbe5720101b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179109 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f99085de-37df-4176-94af-d62782e53e73-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4x7tb\" (UID: \"f99085de-37df-4176-94af-d62782e53e73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179129 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m45ms\" (UniqueName: \"kubernetes.io/projected/b4904a98-07ab-4aab-b7e5-fbcab02bf70d-kube-api-access-m45ms\") pod \"kube-storage-version-migrator-operator-b67b599dd-4pd8w\" (UID: \"b4904a98-07ab-4aab-b7e5-fbcab02bf70d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179149 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-audit-policies\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179165 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/78ae9445-87b0-4e87-a46c-d2a0146d2051-default-certificate\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179182 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d2ded945-4867-4a9b-bc0d-0643f04decbe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9x6zp\" (UID: \"d2ded945-4867-4a9b-bc0d-0643f04decbe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179199 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4904a98-07ab-4aab-b7e5-fbcab02bf70d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4pd8w\" (UID: \"b4904a98-07ab-4aab-b7e5-fbcab02bf70d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179215 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179233 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x96sp\" (UniqueName: \"kubernetes.io/projected/3228209f-261f-431c-8552-a83291d93d52-kube-api-access-x96sp\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179250 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba240294-0bf3-46a0-8fa5-7d2029e3fd56-config\") pod \"kube-apiserver-operator-766d6c64bb-cklm5\" (UID: \"ba240294-0bf3-46a0-8fa5-7d2029e3fd56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179266 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78ae9445-87b0-4e87-a46c-d2a0146d2051-service-ca-bundle\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179283 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tndqf\" (UniqueName: \"kubernetes.io/projected/d9e559f8-c226-49c4-9354-68241ac7fa62-kube-api-access-tndqf\") pod \"machine-config-operator-74547568cd-pbmcj\" (UID: \"d9e559f8-c226-49c4-9354-68241ac7fa62\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179312 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dbbd3e2-4581-4ca2-9180-3dd62529478e-serving-cert\") pod \"openshift-config-operator-7777fb866f-m68bj\" (UID: \"5dbbd3e2-4581-4ca2-9180-3dd62529478e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179341 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179359 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d7f3068-4f27-4e7c-b458-0fa999acbe60-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5hb2m\" (UID: \"5d7f3068-4f27-4e7c-b458-0fa999acbe60\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179378 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg8mn\" (UniqueName: \"kubernetes.io/projected/394f7cfe-7b2d-40d0-a780-fbbe5720101b-kube-api-access-fg8mn\") pod \"machine-approver-56656f9798-tfx5l\" (UID: \"394f7cfe-7b2d-40d0-a780-fbbe5720101b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179397 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2745246a-5a9d-4f6f-95b1-d4d473571db8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s792g\" (UID: \"2745246a-5a9d-4f6f-95b1-d4d473571db8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179415 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9105100f-dce8-42a8-b182-945de577952d-audit-dir\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179448 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/78ae9445-87b0-4e87-a46c-d2a0146d2051-stats-auth\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179465 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d9e559f8-c226-49c4-9354-68241ac7fa62-images\") pod \"machine-config-operator-74547568cd-pbmcj\" (UID: \"d9e559f8-c226-49c4-9354-68241ac7fa62\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179482 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wtls\" (UniqueName: \"kubernetes.io/projected/d23074ca-8045-4137-aa07-78b59df3dbc9-kube-api-access-8wtls\") pod \"cluster-image-registry-operator-dc59b4c8b-mwkdf\" (UID: \"d23074ca-8045-4137-aa07-78b59df3dbc9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179499 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztdg7\" (UniqueName: \"kubernetes.io/projected/b6d0e889-a325-491c-bfda-b5d2090f4f97-kube-api-access-ztdg7\") pod \"dns-operator-744455d44c-msmb4\" (UID: \"b6d0e889-a325-491c-bfda-b5d2090f4f97\") " pod="openshift-dns-operator/dns-operator-744455d44c-msmb4"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179515 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d7f3068-4f27-4e7c-b458-0fa999acbe60-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5hb2m\" (UID: \"5d7f3068-4f27-4e7c-b458-0fa999acbe60\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179531 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/251e8541-289e-4085-9d90-5621ea85ca04-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-j5hn8\" (UID: \"251e8541-289e-4085-9d90-5621ea85ca04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179547 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qctn5\" (UniqueName: \"kubernetes.io/projected/0c0f693c-c46e-4c30-af8b-f962c3362e2a-kube-api-access-qctn5\") pod \"cluster-samples-operator-665b6dd947-r28hj\" (UID: \"0c0f693c-c46e-4c30-af8b-f962c3362e2a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179563 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179588 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3228209f-261f-431c-8552-a83291d93d52-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179604 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d23074ca-8045-4137-aa07-78b59df3dbc9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mwkdf\" (UID: \"d23074ca-8045-4137-aa07-78b59df3dbc9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179620 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7r5g\" (UniqueName: \"kubernetes.io/projected/5dbbd3e2-4581-4ca2-9180-3dd62529478e-kube-api-access-f7r5g\") pod \"openshift-config-operator-7777fb866f-m68bj\" (UID: \"5dbbd3e2-4581-4ca2-9180-3dd62529478e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179635 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0c0f693c-c46e-4c30-af8b-f962c3362e2a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r28hj\" (UID: \"0c0f693c-c46e-4c30-af8b-f962c3362e2a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179658 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179674 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179689 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78ae9445-87b0-4e87-a46c-d2a0146d2051-metrics-certs\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179704 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-audit-dir\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179719 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98tmd\" (UniqueName: \"kubernetes.io/projected/a35b162c-6740-4f1d-bb18-3b8abbc84877-kube-api-access-98tmd\") pod \"migrator-59844c95c7-7f9v5\" (UID: \"a35b162c-6740-4f1d-bb18-3b8abbc84877\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7f9v5"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179736 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179751 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4904a98-07ab-4aab-b7e5-fbcab02bf70d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4pd8w\" (UID: \"b4904a98-07ab-4aab-b7e5-fbcab02bf70d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179773 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179789 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179804 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3228209f-261f-431c-8552-a83291d93d52-service-ca-bundle\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179822 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9e559f8-c226-49c4-9354-68241ac7fa62-proxy-tls\") pod \"machine-config-operator-74547568cd-pbmcj\" (UID: \"d9e559f8-c226-49c4-9354-68241ac7fa62\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179840 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8s75\" (UniqueName: \"kubernetes.io/projected/9d1c820f-f578-4833-a1b4-62e9cb304a2b-kube-api-access-f8s75\") pod \"ingress-operator-5b745b69d9-lc6l2\" (UID: \"9d1c820f-f578-4833-a1b4-62e9cb304a2b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179855 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d23074ca-8045-4137-aa07-78b59df3dbc9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mwkdf\" (UID: \"d23074ca-8045-4137-aa07-78b59df3dbc9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179883 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f99085de-37df-4176-94af-d62782e53e73-config\") pod \"kube-controller-manager-operator-78b949d7b-4x7tb\" (UID: \"f99085de-37df-4176-94af-d62782e53e73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179902 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179919 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm5wr\" (UniqueName: \"kubernetes.io/projected/8937e868-388c-45cd-9260-972f5fd599e7-kube-api-access-jm5wr\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179933 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d2ded945-4867-4a9b-bc0d-0643f04decbe-proxy-tls\") pod \"machine-config-controller-84d6567774-9x6zp\" (UID: \"d2ded945-4867-4a9b-bc0d-0643f04decbe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179949 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179966 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.179988 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3228209f-261f-431c-8552-a83291d93d52-serving-cert\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180004 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgwfq\" (UniqueName: \"kubernetes.io/projected/3da7b766-72c7-4f00-9787-4fc5d86ef9c0-kube-api-access-jgwfq\") pod \"downloads-7954f5f757-cjq4g\" (UID: \"3da7b766-72c7-4f00-9787-4fc5d86ef9c0\") " pod="openshift-console/downloads-7954f5f757-cjq4g"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180020 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d1c820f-f578-4833-a1b4-62e9cb304a2b-trusted-ca\") pod \"ingress-operator-5b745b69d9-lc6l2\" (UID: \"9d1c820f-f578-4833-a1b4-62e9cb304a2b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180035 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8937e868-388c-45cd-9260-972f5fd599e7-config\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180049 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d23074ca-8045-4137-aa07-78b59df3dbc9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mwkdf\" (UID: \"d23074ca-8045-4137-aa07-78b59df3dbc9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180066 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhdnp\" (UniqueName: \"kubernetes.io/projected/d2ded945-4867-4a9b-bc0d-0643f04decbe-kube-api-access-lhdnp\") pod \"machine-config-controller-84d6567774-9x6zp\" (UID: \"d2ded945-4867-4a9b-bc0d-0643f04decbe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180082 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8937e868-388c-45cd-9260-972f5fd599e7-etcd-client\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180096 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f99085de-37df-4176-94af-d62782e53e73-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4x7tb\" (UID: \"f99085de-37df-4176-94af-d62782e53e73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180113 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvwrq\" (UniqueName: \"kubernetes.io/projected/5d7f3068-4f27-4e7c-b458-0fa999acbe60-kube-api-access-qvwrq\") pod \"openshift-apiserver-operator-796bbdcf4f-5hb2m\" (UID: \"5d7f3068-4f27-4e7c-b458-0fa999acbe60\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180128 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3228209f-261f-431c-8552-a83291d93d52-config\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180143 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8937e868-388c-45cd-9260-972f5fd599e7-etcd-ca\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180159 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d1c820f-f578-4833-a1b4-62e9cb304a2b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lc6l2\" (UID: \"9d1c820f-f578-4833-a1b4-62e9cb304a2b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180174 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba240294-0bf3-46a0-8fa5-7d2029e3fd56-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cklm5\" (UID: \"ba240294-0bf3-46a0-8fa5-7d2029e3fd56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180188 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/251e8541-289e-4085-9d90-5621ea85ca04-images\") pod \"machine-api-operator-5694c8668f-j5hn8\" (UID: \"251e8541-289e-4085-9d90-5621ea85ca04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.180939 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/251e8541-289e-4085-9d90-5621ea85ca04-images\") pod \"machine-api-operator-5694c8668f-j5hn8\" (UID: \"251e8541-289e-4085-9d90-5621ea85ca04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.181370 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8937e868-388c-45cd-9260-972f5fd599e7-etcd-service-ca\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.183072 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8937e868-388c-45cd-9260-972f5fd599e7-config\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.183461 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394f7cfe-7b2d-40d0-a780-fbbe5720101b-config\") pod \"machine-approver-56656f9798-tfx5l\" (UID: \"394f7cfe-7b2d-40d0-a780-fbbe5720101b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.183613 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8937e868-388c-45cd-9260-972f5fd599e7-etcd-ca\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.184250 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3228209f-261f-431c-8552-a83291d93d52-config\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.184546 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9105100f-dce8-42a8-b182-945de577952d-audit-dir\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.185046 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.185127 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d7f3068-4f27-4e7c-b458-0fa999acbe60-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5hb2m\" (UID: \"5d7f3068-4f27-4e7c-b458-0fa999acbe60\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.185480 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d7f3068-4f27-4e7c-b458-0fa999acbe60-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5hb2m\" (UID: \"5d7f3068-4f27-4e7c-b458-0fa999acbe60\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.185504 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.185855 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-audit-policies\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.185978 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d23074ca-8045-4137-aa07-78b59df3dbc9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mwkdf\" (UID: \"d23074ca-8045-4137-aa07-78b59df3dbc9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.186052 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-audit-dir\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.186279 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.186534 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d2ded945-4867-4a9b-bc0d-0643f04decbe-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9x6zp\" (UID: \"d2ded945-4867-4a9b-bc0d-0643f04decbe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.186562 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.186933 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3228209f-261f-431c-8552-a83291d93d52-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.187148 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3228209f-261f-431c-8552-a83291d93d52-service-ca-bundle\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.187327 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-etcd-client\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.187343 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.187719 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8937e868-388c-45cd-9260-972f5fd599e7-etcd-client\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.188064 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/394f7cfe-7b2d-40d0-a780-fbbe5720101b-auth-proxy-config\") pod \"machine-approver-56656f9798-tfx5l\" (UID: \"394f7cfe-7b2d-40d0-a780-fbbe5720101b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.188196 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.188297 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/251e8541-289e-4085-9d90-5621ea85ca04-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-j5hn8\" (UID: \"251e8541-289e-4085-9d90-5621ea85ca04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.188395 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.188540 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-audit-policies\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.188896 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.189066 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5dbbd3e2-4581-4ca2-9180-3dd62529478e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m68bj\" (UID: \"5dbbd3e2-4581-4ca2-9180-3dd62529478e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.189143 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251e8541-289e-4085-9d90-5621ea85ca04-config\") pod \"machine-api-operator-5694c8668f-j5hn8\" (UID: \"251e8541-289e-4085-9d90-5621ea85ca04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.189287 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d23074ca-8045-4137-aa07-78b59df3dbc9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mwkdf\" (UID: \"d23074ca-8045-4137-aa07-78b59df3dbc9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.189360 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9e559f8-c226-49c4-9354-68241ac7fa62-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pbmcj\" (UID: \"d9e559f8-c226-49c4-9354-68241ac7fa62\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.190060 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.190068 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6d0e889-a325-491c-bfda-b5d2090f4f97-metrics-tls\") pod \"dns-operator-744455d44c-msmb4\" (UID: \"b6d0e889-a325-491c-bfda-b5d2090f4f97\") " pod="openshift-dns-operator/dns-operator-744455d44c-msmb4"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.190237 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5dbbd3e2-4581-4ca2-9180-3dd62529478e-serving-cert\") pod \"openshift-config-operator-7777fb866f-m68bj\" (UID: \"5dbbd3e2-4581-4ca2-9180-3dd62529478e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj"
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.190404 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.190477 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.190520 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3228209f-261f-431c-8552-a83291d93d52-serving-cert\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.191491 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.191623 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-serving-cert\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" 
Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.191699 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-encryption-config\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.191513 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8937e868-388c-45cd-9260-972f5fd599e7-serving-cert\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.191847 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/394f7cfe-7b2d-40d0-a780-fbbe5720101b-machine-approver-tls\") pod \"machine-approver-56656f9798-tfx5l\" (UID: \"394f7cfe-7b2d-40d0-a780-fbbe5720101b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.192367 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0c0f693c-c46e-4c30-af8b-f962c3362e2a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r28hj\" (UID: \"0c0f693c-c46e-4c30-af8b-f962c3362e2a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.192694 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: 
\"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.192914 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d2ded945-4867-4a9b-bc0d-0643f04decbe-proxy-tls\") pod \"machine-config-controller-84d6567774-9x6zp\" (UID: \"d2ded945-4867-4a9b-bc0d-0643f04decbe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.195278 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.198035 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/78ae9445-87b0-4e87-a46c-d2a0146d2051-stats-auth\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.201398 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.208679 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78ae9445-87b0-4e87-a46c-d2a0146d2051-metrics-certs\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.219886 
5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.241009 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.248395 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/78ae9445-87b0-4e87-a46c-d2a0146d2051-default-certificate\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.260167 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.280218 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.287041 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78ae9445-87b0-4e87-a46c-d2a0146d2051-service-ca-bundle\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.300238 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.320336 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.326572 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/d9e559f8-c226-49c4-9354-68241ac7fa62-images\") pod \"machine-config-operator-74547568cd-pbmcj\" (UID: \"d9e559f8-c226-49c4-9354-68241ac7fa62\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.342127 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.360935 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.370133 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9e559f8-c226-49c4-9354-68241ac7fa62-proxy-tls\") pod \"machine-config-operator-74547568cd-pbmcj\" (UID: \"d9e559f8-c226-49c4-9354-68241ac7fa62\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.381502 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.400528 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.421325 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.425771 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d1c820f-f578-4833-a1b4-62e9cb304a2b-metrics-tls\") pod \"ingress-operator-5b745b69d9-lc6l2\" (UID: \"9d1c820f-f578-4833-a1b4-62e9cb304a2b\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.447816 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.455001 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d1c820f-f578-4833-a1b4-62e9cb304a2b-trusted-ca\") pod \"ingress-operator-5b745b69d9-lc6l2\" (UID: \"9d1c820f-f578-4833-a1b4-62e9cb304a2b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.460771 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.481267 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.501055 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.521096 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.540490 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.566794 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.601647 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.620100 5007 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.629271 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2745246a-5a9d-4f6f-95b1-d4d473571db8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s792g\" (UID: \"2745246a-5a9d-4f6f-95b1-d4d473571db8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.640653 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.661640 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.680704 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.692977 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f99085de-37df-4176-94af-d62782e53e73-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4x7tb\" (UID: \"f99085de-37df-4176-94af-d62782e53e73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.701925 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.708873 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f99085de-37df-4176-94af-d62782e53e73-config\") pod \"kube-controller-manager-operator-78b949d7b-4x7tb\" (UID: \"f99085de-37df-4176-94af-d62782e53e73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.721371 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.741896 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.748478 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba240294-0bf3-46a0-8fa5-7d2029e3fd56-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cklm5\" (UID: \"ba240294-0bf3-46a0-8fa5-7d2029e3fd56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.760335 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.781403 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.787038 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba240294-0bf3-46a0-8fa5-7d2029e3fd56-config\") pod \"kube-apiserver-operator-766d6c64bb-cklm5\" (UID: \"ba240294-0bf3-46a0-8fa5-7d2029e3fd56\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.801277 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.820147 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.841075 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.861513 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.880054 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.900736 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.920388 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.941280 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.961647 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.979255 5007 request.go:700] 
Waited for 1.011593605s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&limit=500&resourceVersion=0 Feb 18 19:40:55 crc kubenswrapper[5007]: I0218 19:40:55.980881 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.001099 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.010284 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4904a98-07ab-4aab-b7e5-fbcab02bf70d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4pd8w\" (UID: \"b4904a98-07ab-4aab-b7e5-fbcab02bf70d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.020453 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.027510 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4904a98-07ab-4aab-b7e5-fbcab02bf70d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4pd8w\" (UID: \"b4904a98-07ab-4aab-b7e5-fbcab02bf70d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.040772 5007 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.061025 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.080090 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.122760 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.140561 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.163691 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.188348 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.202058 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.220605 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.241380 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.261109 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 19:40:56 
crc kubenswrapper[5007]: I0218 19:40:56.281375 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.300991 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.320456 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.341461 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.362011 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.381602 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.401550 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.421203 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.440033 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.461953 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:40:56 crc 
kubenswrapper[5007]: I0218 19:40:56.481522 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.501118 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.520623 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.541263 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.560857 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.581690 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.600746 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.621394 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.641840 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.661346 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.682450 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 19:40:56 
crc kubenswrapper[5007]: I0218 19:40:56.706618 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.756228 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v864z\" (UniqueName: \"kubernetes.io/projected/cee7777d-8cb6-4c90-a650-4da91faaf021-kube-api-access-v864z\") pod \"apiserver-76f77b778f-d8pd9\" (UID: \"cee7777d-8cb6-4c90-a650-4da91faaf021\") " pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.772509 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9l4n\" (UniqueName: \"kubernetes.io/projected/02fc3cf7-03bd-4940-956b-51c16a41c310-kube-api-access-m9l4n\") pod \"controller-manager-879f6c89f-gdbm9\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.787319 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.791785 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8qb2\" (UniqueName: \"kubernetes.io/projected/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-kube-api-access-c8qb2\") pod \"route-controller-manager-6576b87f9c-jlmb7\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.801317 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.821927 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 
19:40:56.841308 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.861146 5007 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.881609 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.901947 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.921256 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.942265 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.983969 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.989344 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgfgn\" (UniqueName: \"kubernetes.io/projected/3b8e79b9-abbe-4c80-9bce-5c046a75e2d9-kube-api-access-qgfgn\") pod \"apiserver-7bbb656c7d-mh6nh\" (UID: \"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.991780 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.998746 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgh55\" (UniqueName: \"kubernetes.io/projected/2745246a-5a9d-4f6f-95b1-d4d473571db8-kube-api-access-jgh55\") pod \"control-plane-machine-set-operator-78cbb6b69f-s792g\" (UID: \"2745246a-5a9d-4f6f-95b1-d4d473571db8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g" Feb 18 19:40:56 crc kubenswrapper[5007]: I0218 19:40:56.998862 5007 request.go:700] Waited for 1.816218851s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.028535 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.029284 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvwrq\" (UniqueName: \"kubernetes.io/projected/5d7f3068-4f27-4e7c-b458-0fa999acbe60-kube-api-access-qvwrq\") pod \"openshift-apiserver-operator-796bbdcf4f-5hb2m\" (UID: \"5d7f3068-4f27-4e7c-b458-0fa999acbe60\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.043988 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d23074ca-8045-4137-aa07-78b59df3dbc9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mwkdf\" (UID: \"d23074ca-8045-4137-aa07-78b59df3dbc9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf" Feb 18 19:40:57 crc 
kubenswrapper[5007]: I0218 19:40:57.048148 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.062707 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.072039 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhdnp\" (UniqueName: \"kubernetes.io/projected/d2ded945-4867-4a9b-bc0d-0643f04decbe-kube-api-access-lhdnp\") pod \"machine-config-controller-84d6567774-9x6zp\" (UID: \"d2ded945-4867-4a9b-bc0d-0643f04decbe\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.083077 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f99085de-37df-4176-94af-d62782e53e73-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4x7tb\" (UID: \"f99085de-37df-4176-94af-d62782e53e73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.105955 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m45ms\" (UniqueName: \"kubernetes.io/projected/b4904a98-07ab-4aab-b7e5-fbcab02bf70d-kube-api-access-m45ms\") pod \"kube-storage-version-migrator-operator-b67b599dd-4pd8w\" (UID: \"b4904a98-07ab-4aab-b7e5-fbcab02bf70d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.138396 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgwfq\" (UniqueName: 
\"kubernetes.io/projected/3da7b766-72c7-4f00-9787-4fc5d86ef9c0-kube-api-access-jgwfq\") pod \"downloads-7954f5f757-cjq4g\" (UID: \"3da7b766-72c7-4f00-9787-4fc5d86ef9c0\") " pod="openshift-console/downloads-7954f5f757-cjq4g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.163083 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.186960 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d1c820f-f578-4833-a1b4-62e9cb304a2b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lc6l2\" (UID: \"9d1c820f-f578-4833-a1b4-62e9cb304a2b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.187826 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztdg7\" (UniqueName: \"kubernetes.io/projected/b6d0e889-a325-491c-bfda-b5d2090f4f97-kube-api-access-ztdg7\") pod \"dns-operator-744455d44c-msmb4\" (UID: \"b6d0e889-a325-491c-bfda-b5d2090f4f97\") " pod="openshift-dns-operator/dns-operator-744455d44c-msmb4" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.190734 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wtls\" (UniqueName: \"kubernetes.io/projected/d23074ca-8045-4137-aa07-78b59df3dbc9-kube-api-access-8wtls\") pod \"cluster-image-registry-operator-dc59b4c8b-mwkdf\" (UID: \"d23074ca-8045-4137-aa07-78b59df3dbc9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.196817 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg8mn\" (UniqueName: \"kubernetes.io/projected/394f7cfe-7b2d-40d0-a780-fbbe5720101b-kube-api-access-fg8mn\") pod 
\"machine-approver-56656f9798-tfx5l\" (UID: \"394f7cfe-7b2d-40d0-a780-fbbe5720101b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.211883 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-msmb4" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.225596 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x96sp\" (UniqueName: \"kubernetes.io/projected/3228209f-261f-431c-8552-a83291d93d52-kube-api-access-x96sp\") pod \"authentication-operator-69f744f599-8r6kd\" (UID: \"3228209f-261f-431c-8552-a83291d93d52\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.235232 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qctn5\" (UniqueName: \"kubernetes.io/projected/0c0f693c-c46e-4c30-af8b-f962c3362e2a-kube-api-access-qctn5\") pod \"cluster-samples-operator-665b6dd947-r28hj\" (UID: \"0c0f693c-c46e-4c30-af8b-f962c3362e2a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.250727 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cjq4g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.263344 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.275369 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tndqf\" (UniqueName: \"kubernetes.io/projected/d9e559f8-c226-49c4-9354-68241ac7fa62-kube-api-access-tndqf\") pod \"machine-config-operator-74547568cd-pbmcj\" (UID: \"d9e559f8-c226-49c4-9354-68241ac7fa62\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.279838 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7r5g\" (UniqueName: \"kubernetes.io/projected/5dbbd3e2-4581-4ca2-9180-3dd62529478e-kube-api-access-f7r5g\") pod \"openshift-config-operator-7777fb866f-m68bj\" (UID: \"5dbbd3e2-4581-4ca2-9180-3dd62529478e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.282643 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.295779 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98tmd\" (UniqueName: \"kubernetes.io/projected/a35b162c-6740-4f1d-bb18-3b8abbc84877-kube-api-access-98tmd\") pod \"migrator-59844c95c7-7f9v5\" (UID: \"a35b162c-6740-4f1d-bb18-3b8abbc84877\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7f9v5" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.319065 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8s75\" (UniqueName: \"kubernetes.io/projected/9d1c820f-f578-4833-a1b4-62e9cb304a2b-kube-api-access-f8s75\") pod \"ingress-operator-5b745b69d9-lc6l2\" (UID: \"9d1c820f-f578-4833-a1b4-62e9cb304a2b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.328205 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.335632 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.344285 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm5wr\" (UniqueName: \"kubernetes.io/projected/8937e868-388c-45cd-9260-972f5fd599e7-kube-api-access-jm5wr\") pod \"etcd-operator-b45778765-lb5x9\" (UID: \"8937e868-388c-45cd-9260-972f5fd599e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.354343 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.363869 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.363917 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7f9v5" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.364865 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwkf2\" (UniqueName: \"kubernetes.io/projected/9105100f-dce8-42a8-b182-945de577952d-kube-api-access-gwkf2\") pod \"oauth-openshift-558db77b4-b2xj7\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.373708 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.380820 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6md2\" (UniqueName: \"kubernetes.io/projected/251e8541-289e-4085-9d90-5621ea85ca04-kube-api-access-j6md2\") pod \"machine-api-operator-5694c8668f-j5hn8\" (UID: \"251e8541-289e-4085-9d90-5621ea85ca04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.381047 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.403673 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7"] Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.431660 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.435416 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g"] Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.436774 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d8pd9"] Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.438610 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba240294-0bf3-46a0-8fa5-7d2029e3fd56-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cklm5\" (UID: \"ba240294-0bf3-46a0-8fa5-7d2029e3fd56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.439037 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9b42\" (UniqueName: \"kubernetes.io/projected/78ae9445-87b0-4e87-a46c-d2a0146d2051-kube-api-access-h9b42\") pod \"router-default-5444994796-764l7\" (UID: \"78ae9445-87b0-4e87-a46c-d2a0146d2051\") " pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.453605 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" Feb 18 19:40:57 crc kubenswrapper[5007]: W0218 19:40:57.474977 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcee7777d_8cb6_4c90_a650_4da91faaf021.slice/crio-896b3c823d96f2c7b7ab2da7758ad80e5fb994dd1355af5421fa0fe09071eef1 WatchSource:0}: Error finding container 896b3c823d96f2c7b7ab2da7758ad80e5fb994dd1355af5421fa0fe09071eef1: Status 404 returned error can't find the container with id 896b3c823d96f2c7b7ab2da7758ad80e5fb994dd1355af5421fa0fe09071eef1 Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.477895 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-config\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.477926 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1fd9d95-9876-43a9-b061-b5ef6bb15741-config\") pod \"console-operator-58897d9998-tw86n\" (UID: \"b1fd9d95-9876-43a9-b061-b5ef6bb15741\") " pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.477946 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fd9d95-9876-43a9-b061-b5ef6bb15741-serving-cert\") pod \"console-operator-58897d9998-tw86n\" (UID: \"b1fd9d95-9876-43a9-b061-b5ef6bb15741\") " pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.477967 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b8a38a2-828b-4d45-989f-f3b42f5c7ef5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xjcf2\" (UID: \"0b8a38a2-828b-4d45-989f-f3b42f5c7ef5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.477999 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-service-ca\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478018 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-trusted-ca-bundle\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478035 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8a38a2-828b-4d45-989f-f3b42f5c7ef5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xjcf2\" (UID: \"0b8a38a2-828b-4d45-989f-f3b42f5c7ef5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478054 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5accf517-2cda-44c4-b3ce-f57e02ec9fc5-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-s9wh5\" (UID: \"5accf517-2cda-44c4-b3ce-f57e02ec9fc5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478078 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478114 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f895f35a-b144-444a-9329-3e7c3d0a1d9b-registry-certificates\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478130 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b8a38a2-828b-4d45-989f-f3b42f5c7ef5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xjcf2\" (UID: \"0b8a38a2-828b-4d45-989f-f3b42f5c7ef5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478167 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6t22\" (UniqueName: \"kubernetes.io/projected/7afb27dc-b979-45ec-ae79-5595d0dedb6e-kube-api-access-n6t22\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc 
kubenswrapper[5007]: I0218 19:40:57.478191 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-oauth-serving-cert\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478217 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-bound-sa-token\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478234 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzdhd\" (UniqueName: \"kubernetes.io/projected/b1fd9d95-9876-43a9-b061-b5ef6bb15741-kube-api-access-qzdhd\") pod \"console-operator-58897d9998-tw86n\" (UID: \"b1fd9d95-9876-43a9-b061-b5ef6bb15741\") " pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478254 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f895f35a-b144-444a-9329-3e7c3d0a1d9b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478269 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx94p\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-kube-api-access-qx94p\") 
pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478283 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-registry-tls\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478300 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1fd9d95-9876-43a9-b061-b5ef6bb15741-trusted-ca\") pod \"console-operator-58897d9998-tw86n\" (UID: \"b1fd9d95-9876-43a9-b061-b5ef6bb15741\") " pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478323 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-serving-cert\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478341 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4vhb\" (UniqueName: \"kubernetes.io/projected/5accf517-2cda-44c4-b3ce-f57e02ec9fc5-kube-api-access-n4vhb\") pod \"package-server-manager-789f6589d5-s9wh5\" (UID: \"5accf517-2cda-44c4-b3ce-f57e02ec9fc5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478362 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-oauth-config\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478379 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f895f35a-b144-444a-9329-3e7c3d0a1d9b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.478394 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f895f35a-b144-444a-9329-3e7c3d0a1d9b-trusted-ca\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: E0218 19:40:57.478716 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:40:57.978704079 +0000 UTC m=+142.760823613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.503683 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.532085 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.534314 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh"] Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.567803 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cjq4g"] Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.570081 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gdbm9"] Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.573257 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579011 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579251 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-mountpoint-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579294 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-bound-sa-token\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579320 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/370cd8fe-7bde-4e36-b8bc-dca582137d44-secret-volume\") pod \"collect-profiles-29524050-b7rpb\" (UID: \"370cd8fe-7bde-4e36-b8bc-dca582137d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579339 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/075a793c-9e04-43eb-80ee-b0c10d727b22-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-d6vc6\" (UID: \"075a793c-9e04-43eb-80ee-b0c10d727b22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579389 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzdhd\" (UniqueName: \"kubernetes.io/projected/b1fd9d95-9876-43a9-b061-b5ef6bb15741-kube-api-access-qzdhd\") pod \"console-operator-58897d9998-tw86n\" (UID: \"b1fd9d95-9876-43a9-b061-b5ef6bb15741\") " pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579405 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgk57\" (UniqueName: \"kubernetes.io/projected/c17be73d-c271-43eb-b4c6-425ead314225-kube-api-access-sgk57\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579424 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-registration-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579461 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qm6b\" (UniqueName: \"kubernetes.io/projected/f52441fb-f37f-4482-8564-7e6527e2b16a-kube-api-access-9qm6b\") pod \"dns-default-8rhk6\" (UID: \"f52441fb-f37f-4482-8564-7e6527e2b16a\") " pod="openshift-dns/dns-default-8rhk6" Feb 18 19:40:57 crc 
kubenswrapper[5007]: I0218 19:40:57.579479 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kfmgt\" (UID: \"5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kfmgt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579529 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f895f35a-b144-444a-9329-3e7c3d0a1d9b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579551 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx94p\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-kube-api-access-qx94p\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579570 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-registry-tls\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579628 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1fd9d95-9876-43a9-b061-b5ef6bb15741-trusted-ca\") pod \"console-operator-58897d9998-tw86n\" (UID: 
\"b1fd9d95-9876-43a9-b061-b5ef6bb15741\") " pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579644 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndxsf\" (UniqueName: \"kubernetes.io/projected/370cd8fe-7bde-4e36-b8bc-dca582137d44-kube-api-access-ndxsf\") pod \"collect-profiles-29524050-b7rpb\" (UID: \"370cd8fe-7bde-4e36-b8bc-dca582137d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579661 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvqdd\" (UniqueName: \"kubernetes.io/projected/5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1-kube-api-access-vvqdd\") pod \"multus-admission-controller-857f4d67dd-kfmgt\" (UID: \"5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kfmgt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579679 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/95a8b9d6-c7e2-4654-b113-fa4db3836199-certs\") pod \"machine-config-server-pg4xw\" (UID: \"95a8b9d6-c7e2-4654-b113-fa4db3836199\") " pod="openshift-machine-config-operator/machine-config-server-pg4xw" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579714 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-serving-cert\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579751 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/075a793c-9e04-43eb-80ee-b0c10d727b22-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-d6vc6\" (UID: \"075a793c-9e04-43eb-80ee-b0c10d727b22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579808 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4vhb\" (UniqueName: \"kubernetes.io/projected/5accf517-2cda-44c4-b3ce-f57e02ec9fc5-kube-api-access-n4vhb\") pod \"package-server-manager-789f6589d5-s9wh5\" (UID: \"5accf517-2cda-44c4-b3ce-f57e02ec9fc5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579836 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/370cd8fe-7bde-4e36-b8bc-dca582137d44-config-volume\") pod \"collect-profiles-29524050-b7rpb\" (UID: \"370cd8fe-7bde-4e36-b8bc-dca582137d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:40:57 crc kubenswrapper[5007]: E0218 19:40:57.579866 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:40:58.079846676 +0000 UTC m=+142.861966200 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579912 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f52441fb-f37f-4482-8564-7e6527e2b16a-config-volume\") pod \"dns-default-8rhk6\" (UID: \"f52441fb-f37f-4482-8564-7e6527e2b16a\") " pod="openshift-dns/dns-default-8rhk6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.579964 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sv8k\" (UniqueName: \"kubernetes.io/projected/eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7-kube-api-access-4sv8k\") pod \"packageserver-d55dfcdfc-pgvrb\" (UID: \"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580004 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7-webhook-cert\") pod \"packageserver-d55dfcdfc-pgvrb\" (UID: \"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580020 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/a0457430-d92a-464b-8cc7-e36883e9c718-profile-collector-cert\") pod \"catalog-operator-68c6474976-sz9kn\" (UID: \"a0457430-d92a-464b-8cc7-e36883e9c718\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580036 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vws6t\" (UniqueName: \"kubernetes.io/projected/075a793c-9e04-43eb-80ee-b0c10d727b22-kube-api-access-vws6t\") pod \"openshift-controller-manager-operator-756b6f6bc6-d6vc6\" (UID: \"075a793c-9e04-43eb-80ee-b0c10d727b22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580106 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7e8415f-6c51-452d-b0dc-13edd5475f6d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tfxfg\" (UID: \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580123 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29-signing-cabundle\") pod \"service-ca-9c57cc56f-82445\" (UID: \"2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29\") " pod="openshift-service-ca/service-ca-9c57cc56f-82445" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580196 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-oauth-config\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " 
pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580213 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29-signing-key\") pod \"service-ca-9c57cc56f-82445\" (UID: \"2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29\") " pod="openshift-service-ca/service-ca-9c57cc56f-82445" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580236 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f895f35a-b144-444a-9329-3e7c3d0a1d9b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580291 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f895f35a-b144-444a-9329-3e7c3d0a1d9b-trusted-ca\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580306 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/67928300-0460-40b5-bc58-e286187b315e-srv-cert\") pod \"olm-operator-6b444d44fb-z6x8g\" (UID: \"67928300-0460-40b5-bc58-e286187b315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580352 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-config\") pod \"console-f9d7485db-5kxmr\" 
(UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580368 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1fd9d95-9876-43a9-b061-b5ef6bb15741-config\") pod \"console-operator-58897d9998-tw86n\" (UID: \"b1fd9d95-9876-43a9-b061-b5ef6bb15741\") " pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580401 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fd9d95-9876-43a9-b061-b5ef6bb15741-serving-cert\") pod \"console-operator-58897d9998-tw86n\" (UID: \"b1fd9d95-9876-43a9-b061-b5ef6bb15741\") " pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580418 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b8a38a2-828b-4d45-989f-f3b42f5c7ef5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xjcf2\" (UID: \"0b8a38a2-828b-4d45-989f-f3b42f5c7ef5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580478 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/95a8b9d6-c7e2-4654-b113-fa4db3836199-node-bootstrap-token\") pod \"machine-config-server-pg4xw\" (UID: \"95a8b9d6-c7e2-4654-b113-fa4db3836199\") " pod="openshift-machine-config-operator/machine-config-server-pg4xw" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580496 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/67928300-0460-40b5-bc58-e286187b315e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z6x8g\" (UID: \"67928300-0460-40b5-bc58-e286187b315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580515 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e7e8415f-6c51-452d-b0dc-13edd5475f6d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tfxfg\" (UID: \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580544 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a0457430-d92a-464b-8cc7-e36883e9c718-srv-cert\") pod \"catalog-operator-68c6474976-sz9kn\" (UID: \"a0457430-d92a-464b-8cc7-e36883e9c718\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580629 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srrm6\" (UniqueName: \"kubernetes.io/projected/c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b-kube-api-access-srrm6\") pod \"ingress-canary-z6vfc\" (UID: \"c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b\") " pod="openshift-ingress-canary/ingress-canary-z6vfc" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580688 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e485d24d-a88f-49c4-83e3-1f3b9b03663a-config\") pod \"service-ca-operator-777779d784-xms9g\" (UID: \"e485d24d-a88f-49c4-83e3-1f3b9b03663a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" Feb 18 
19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580707 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-service-ca\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580725 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7-tmpfs\") pod \"packageserver-d55dfcdfc-pgvrb\" (UID: \"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580753 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-trusted-ca-bundle\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580773 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8a38a2-828b-4d45-989f-f3b42f5c7ef5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xjcf2\" (UID: \"0b8a38a2-828b-4d45-989f-f3b42f5c7ef5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580790 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7p96\" (UniqueName: \"kubernetes.io/projected/e7e8415f-6c51-452d-b0dc-13edd5475f6d-kube-api-access-l7p96\") pod \"marketplace-operator-79b997595-tfxfg\" (UID: 
\"e7e8415f-6c51-452d-b0dc-13edd5475f6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580845 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5accf517-2cda-44c4-b3ce-f57e02ec9fc5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s9wh5\" (UID: \"5accf517-2cda-44c4-b3ce-f57e02ec9fc5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580869 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7b57\" (UniqueName: \"kubernetes.io/projected/e485d24d-a88f-49c4-83e3-1f3b9b03663a-kube-api-access-x7b57\") pod \"service-ca-operator-777779d784-xms9g\" (UID: \"e485d24d-a88f-49c4-83e3-1f3b9b03663a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580927 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7-apiservice-cert\") pod \"packageserver-d55dfcdfc-pgvrb\" (UID: \"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580960 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.580984 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f895f35a-b144-444a-9329-3e7c3d0a1d9b-registry-certificates\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581011 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b8a38a2-828b-4d45-989f-f3b42f5c7ef5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xjcf2\" (UID: \"0b8a38a2-828b-4d45-989f-f3b42f5c7ef5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581036 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g88h\" (UniqueName: \"kubernetes.io/projected/2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29-kube-api-access-9g88h\") pod \"service-ca-9c57cc56f-82445\" (UID: \"2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29\") " pod="openshift-service-ca/service-ca-9c57cc56f-82445" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581120 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgjv8\" (UniqueName: \"kubernetes.io/projected/67928300-0460-40b5-bc58-e286187b315e-kube-api-access-lgjv8\") pod \"olm-operator-6b444d44fb-z6x8g\" (UID: \"67928300-0460-40b5-bc58-e286187b315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581137 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-socket-dir\") pod \"csi-hostpathplugin-clz74\" 
(UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581152 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e485d24d-a88f-49c4-83e3-1f3b9b03663a-serving-cert\") pod \"service-ca-operator-777779d784-xms9g\" (UID: \"e485d24d-a88f-49c4-83e3-1f3b9b03663a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581169 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr76g\" (UniqueName: \"kubernetes.io/projected/a0457430-d92a-464b-8cc7-e36883e9c718-kube-api-access-vr76g\") pod \"catalog-operator-68c6474976-sz9kn\" (UID: \"a0457430-d92a-464b-8cc7-e36883e9c718\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581198 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-plugins-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581215 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2z52\" (UniqueName: \"kubernetes.io/projected/95a8b9d6-c7e2-4654-b113-fa4db3836199-kube-api-access-b2z52\") pod \"machine-config-server-pg4xw\" (UID: \"95a8b9d6-c7e2-4654-b113-fa4db3836199\") " pod="openshift-machine-config-operator/machine-config-server-pg4xw" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581315 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n6t22\" (UniqueName: \"kubernetes.io/projected/7afb27dc-b979-45ec-ae79-5595d0dedb6e-kube-api-access-n6t22\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581340 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f52441fb-f37f-4482-8564-7e6527e2b16a-metrics-tls\") pod \"dns-default-8rhk6\" (UID: \"f52441fb-f37f-4482-8564-7e6527e2b16a\") " pod="openshift-dns/dns-default-8rhk6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581361 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b-cert\") pod \"ingress-canary-z6vfc\" (UID: \"c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b\") " pod="openshift-ingress-canary/ingress-canary-z6vfc" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581388 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-csi-data-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.581468 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-oauth-serving-cert\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.582144 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-oauth-serving-cert\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.583471 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f895f35a-b144-444a-9329-3e7c3d0a1d9b-trusted-ca\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: E0218 19:40:57.584156 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:40:58.084143662 +0000 UTC m=+142.866263186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.584717 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-config\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.585837 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b1fd9d95-9876-43a9-b061-b5ef6bb15741-trusted-ca\") pod \"console-operator-58897d9998-tw86n\" (UID: \"b1fd9d95-9876-43a9-b061-b5ef6bb15741\") " pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.585915 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f895f35a-b144-444a-9329-3e7c3d0a1d9b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.586192 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1fd9d95-9876-43a9-b061-b5ef6bb15741-config\") pod \"console-operator-58897d9998-tw86n\" (UID: \"b1fd9d95-9876-43a9-b061-b5ef6bb15741\") " pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.586841 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f895f35a-b144-444a-9329-3e7c3d0a1d9b-registry-certificates\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.587761 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8a38a2-828b-4d45-989f-f3b42f5c7ef5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xjcf2\" (UID: \"0b8a38a2-828b-4d45-989f-f3b42f5c7ef5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.589497 5007 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-trusted-ca-bundle\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.590031 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-serving-cert\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.590771 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b8a38a2-828b-4d45-989f-f3b42f5c7ef5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xjcf2\" (UID: \"0b8a38a2-828b-4d45-989f-f3b42f5c7ef5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.591249 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-service-ca\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.591578 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1fd9d95-9876-43a9-b061-b5ef6bb15741-serving-cert\") pod \"console-operator-58897d9998-tw86n\" (UID: \"b1fd9d95-9876-43a9-b061-b5ef6bb15741\") " pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.591582 5007 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f895f35a-b144-444a-9329-3e7c3d0a1d9b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.593074 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-registry-tls\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.597552 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-oauth-config\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.603678 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5accf517-2cda-44c4-b3ce-f57e02ec9fc5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s9wh5\" (UID: \"5accf517-2cda-44c4-b3ce-f57e02ec9fc5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.622509 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-bound-sa-token\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 
crc kubenswrapper[5007]: I0218 19:40:57.623047 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.657582 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b8a38a2-828b-4d45-989f-f3b42f5c7ef5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xjcf2\" (UID: \"0b8a38a2-828b-4d45-989f-f3b42f5c7ef5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.672133 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6t22\" (UniqueName: \"kubernetes.io/projected/7afb27dc-b979-45ec-ae79-5595d0dedb6e-kube-api-access-n6t22\") pod \"console-f9d7485db-5kxmr\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.686680 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.686908 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srrm6\" (UniqueName: \"kubernetes.io/projected/c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b-kube-api-access-srrm6\") pod \"ingress-canary-z6vfc\" (UID: \"c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b\") " pod="openshift-ingress-canary/ingress-canary-z6vfc" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.686939 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e485d24d-a88f-49c4-83e3-1f3b9b03663a-config\") pod \"service-ca-operator-777779d784-xms9g\" (UID: \"e485d24d-a88f-49c4-83e3-1f3b9b03663a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.686961 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7-tmpfs\") pod \"packageserver-d55dfcdfc-pgvrb\" (UID: \"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.686985 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7p96\" (UniqueName: \"kubernetes.io/projected/e7e8415f-6c51-452d-b0dc-13edd5475f6d-kube-api-access-l7p96\") pod \"marketplace-operator-79b997595-tfxfg\" (UID: \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687016 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7b57\" (UniqueName: \"kubernetes.io/projected/e485d24d-a88f-49c4-83e3-1f3b9b03663a-kube-api-access-x7b57\") pod \"service-ca-operator-777779d784-xms9g\" (UID: \"e485d24d-a88f-49c4-83e3-1f3b9b03663a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687034 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7-apiservice-cert\") pod \"packageserver-d55dfcdfc-pgvrb\" (UID: \"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687060 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g88h\" (UniqueName: \"kubernetes.io/projected/2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29-kube-api-access-9g88h\") pod \"service-ca-9c57cc56f-82445\" (UID: \"2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29\") " pod="openshift-service-ca/service-ca-9c57cc56f-82445" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687089 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgjv8\" (UniqueName: \"kubernetes.io/projected/67928300-0460-40b5-bc58-e286187b315e-kube-api-access-lgjv8\") pod \"olm-operator-6b444d44fb-z6x8g\" (UID: \"67928300-0460-40b5-bc58-e286187b315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687107 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-socket-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687123 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e485d24d-a88f-49c4-83e3-1f3b9b03663a-serving-cert\") pod \"service-ca-operator-777779d784-xms9g\" (UID: \"e485d24d-a88f-49c4-83e3-1f3b9b03663a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687140 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr76g\" (UniqueName: \"kubernetes.io/projected/a0457430-d92a-464b-8cc7-e36883e9c718-kube-api-access-vr76g\") pod \"catalog-operator-68c6474976-sz9kn\" (UID: \"a0457430-d92a-464b-8cc7-e36883e9c718\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687157 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-plugins-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687173 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2z52\" (UniqueName: \"kubernetes.io/projected/95a8b9d6-c7e2-4654-b113-fa4db3836199-kube-api-access-b2z52\") pod \"machine-config-server-pg4xw\" (UID: \"95a8b9d6-c7e2-4654-b113-fa4db3836199\") " pod="openshift-machine-config-operator/machine-config-server-pg4xw" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687194 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f52441fb-f37f-4482-8564-7e6527e2b16a-metrics-tls\") pod \"dns-default-8rhk6\" (UID: \"f52441fb-f37f-4482-8564-7e6527e2b16a\") " pod="openshift-dns/dns-default-8rhk6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687218 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b-cert\") pod \"ingress-canary-z6vfc\" (UID: \"c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b\") " pod="openshift-ingress-canary/ingress-canary-z6vfc" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687250 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-csi-data-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " 
pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687284 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-mountpoint-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687318 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/370cd8fe-7bde-4e36-b8bc-dca582137d44-secret-volume\") pod \"collect-profiles-29524050-b7rpb\" (UID: \"370cd8fe-7bde-4e36-b8bc-dca582137d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687351 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgk57\" (UniqueName: \"kubernetes.io/projected/c17be73d-c271-43eb-b4c6-425ead314225-kube-api-access-sgk57\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687372 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/075a793c-9e04-43eb-80ee-b0c10d727b22-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-d6vc6\" (UID: \"075a793c-9e04-43eb-80ee-b0c10d727b22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687397 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qm6b\" (UniqueName: \"kubernetes.io/projected/f52441fb-f37f-4482-8564-7e6527e2b16a-kube-api-access-9qm6b\") 
pod \"dns-default-8rhk6\" (UID: \"f52441fb-f37f-4482-8564-7e6527e2b16a\") " pod="openshift-dns/dns-default-8rhk6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.687897 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-socket-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: E0218 19:40:57.688413 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:40:58.187415782 +0000 UTC m=+142.969535306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688465 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kfmgt\" (UID: \"5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kfmgt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688496 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-registration-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688531 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndxsf\" (UniqueName: \"kubernetes.io/projected/370cd8fe-7bde-4e36-b8bc-dca582137d44-kube-api-access-ndxsf\") pod \"collect-profiles-29524050-b7rpb\" (UID: \"370cd8fe-7bde-4e36-b8bc-dca582137d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688551 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvqdd\" (UniqueName: \"kubernetes.io/projected/5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1-kube-api-access-vvqdd\") pod \"multus-admission-controller-857f4d67dd-kfmgt\" (UID: \"5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kfmgt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688568 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/95a8b9d6-c7e2-4654-b113-fa4db3836199-certs\") pod \"machine-config-server-pg4xw\" (UID: \"95a8b9d6-c7e2-4654-b113-fa4db3836199\") " pod="openshift-machine-config-operator/machine-config-server-pg4xw" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688607 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/075a793c-9e04-43eb-80ee-b0c10d727b22-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-d6vc6\" (UID: \"075a793c-9e04-43eb-80ee-b0c10d727b22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 
19:40:57.688643 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/370cd8fe-7bde-4e36-b8bc-dca582137d44-config-volume\") pod \"collect-profiles-29524050-b7rpb\" (UID: \"370cd8fe-7bde-4e36-b8bc-dca582137d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688664 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f52441fb-f37f-4482-8564-7e6527e2b16a-config-volume\") pod \"dns-default-8rhk6\" (UID: \"f52441fb-f37f-4482-8564-7e6527e2b16a\") " pod="openshift-dns/dns-default-8rhk6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688686 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sv8k\" (UniqueName: \"kubernetes.io/projected/eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7-kube-api-access-4sv8k\") pod \"packageserver-d55dfcdfc-pgvrb\" (UID: \"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688710 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7-webhook-cert\") pod \"packageserver-d55dfcdfc-pgvrb\" (UID: \"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688731 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a0457430-d92a-464b-8cc7-e36883e9c718-profile-collector-cert\") pod \"catalog-operator-68c6474976-sz9kn\" (UID: \"a0457430-d92a-464b-8cc7-e36883e9c718\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688759 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vws6t\" (UniqueName: \"kubernetes.io/projected/075a793c-9e04-43eb-80ee-b0c10d727b22-kube-api-access-vws6t\") pod \"openshift-controller-manager-operator-756b6f6bc6-d6vc6\" (UID: \"075a793c-9e04-43eb-80ee-b0c10d727b22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688783 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7e8415f-6c51-452d-b0dc-13edd5475f6d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tfxfg\" (UID: \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688804 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29-signing-cabundle\") pod \"service-ca-9c57cc56f-82445\" (UID: \"2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29\") " pod="openshift-service-ca/service-ca-9c57cc56f-82445" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688826 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29-signing-key\") pod \"service-ca-9c57cc56f-82445\" (UID: \"2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29\") " pod="openshift-service-ca/service-ca-9c57cc56f-82445" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688850 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/67928300-0460-40b5-bc58-e286187b315e-srv-cert\") pod \"olm-operator-6b444d44fb-z6x8g\" (UID: \"67928300-0460-40b5-bc58-e286187b315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688882 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/95a8b9d6-c7e2-4654-b113-fa4db3836199-node-bootstrap-token\") pod \"machine-config-server-pg4xw\" (UID: \"95a8b9d6-c7e2-4654-b113-fa4db3836199\") " pod="openshift-machine-config-operator/machine-config-server-pg4xw" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688898 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/67928300-0460-40b5-bc58-e286187b315e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z6x8g\" (UID: \"67928300-0460-40b5-bc58-e286187b315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688920 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e7e8415f-6c51-452d-b0dc-13edd5475f6d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tfxfg\" (UID: \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.688952 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a0457430-d92a-464b-8cc7-e36883e9c718-srv-cert\") pod \"catalog-operator-68c6474976-sz9kn\" (UID: \"a0457430-d92a-464b-8cc7-e36883e9c718\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.690032 
5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-plugins-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.690798 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/075a793c-9e04-43eb-80ee-b0c10d727b22-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-d6vc6\" (UID: \"075a793c-9e04-43eb-80ee-b0c10d727b22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.691460 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-csi-data-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.691563 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-mountpoint-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.691727 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c17be73d-c271-43eb-b4c6-425ead314225-registration-dir\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.692129 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7-tmpfs\") pod \"packageserver-d55dfcdfc-pgvrb\" (UID: \"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.693794 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/370cd8fe-7bde-4e36-b8bc-dca582137d44-config-volume\") pod \"collect-profiles-29524050-b7rpb\" (UID: \"370cd8fe-7bde-4e36-b8bc-dca582137d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.693804 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e485d24d-a88f-49c4-83e3-1f3b9b03663a-config\") pod \"service-ca-operator-777779d784-xms9g\" (UID: \"e485d24d-a88f-49c4-83e3-1f3b9b03663a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.694303 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f52441fb-f37f-4482-8564-7e6527e2b16a-config-volume\") pod \"dns-default-8rhk6\" (UID: \"f52441fb-f37f-4482-8564-7e6527e2b16a\") " pod="openshift-dns/dns-default-8rhk6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.694534 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7e8415f-6c51-452d-b0dc-13edd5475f6d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tfxfg\" (UID: \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.695176 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29-signing-cabundle\") pod \"service-ca-9c57cc56f-82445\" (UID: \"2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29\") " pod="openshift-service-ca/service-ca-9c57cc56f-82445" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.697815 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.701555 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7-apiservice-cert\") pod \"packageserver-d55dfcdfc-pgvrb\" (UID: \"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.705805 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4vhb\" (UniqueName: \"kubernetes.io/projected/5accf517-2cda-44c4-b3ce-f57e02ec9fc5-kube-api-access-n4vhb\") pod \"package-server-manager-789f6589d5-s9wh5\" (UID: \"5accf517-2cda-44c4-b3ce-f57e02ec9fc5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.709551 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b-cert\") pod \"ingress-canary-z6vfc\" (UID: \"c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b\") " pod="openshift-ingress-canary/ingress-canary-z6vfc" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.711347 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e485d24d-a88f-49c4-83e3-1f3b9b03663a-serving-cert\") pod 
\"service-ca-operator-777779d784-xms9g\" (UID: \"e485d24d-a88f-49c4-83e3-1f3b9b03663a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.711660 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a0457430-d92a-464b-8cc7-e36883e9c718-profile-collector-cert\") pod \"catalog-operator-68c6474976-sz9kn\" (UID: \"a0457430-d92a-464b-8cc7-e36883e9c718\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.712505 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/370cd8fe-7bde-4e36-b8bc-dca582137d44-secret-volume\") pod \"collect-profiles-29524050-b7rpb\" (UID: \"370cd8fe-7bde-4e36-b8bc-dca582137d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.719585 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzdhd\" (UniqueName: \"kubernetes.io/projected/b1fd9d95-9876-43a9-b061-b5ef6bb15741-kube-api-access-qzdhd\") pod \"console-operator-58897d9998-tw86n\" (UID: \"b1fd9d95-9876-43a9-b061-b5ef6bb15741\") " pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.720359 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a0457430-d92a-464b-8cc7-e36883e9c718-srv-cert\") pod \"catalog-operator-68c6474976-sz9kn\" (UID: \"a0457430-d92a-464b-8cc7-e36883e9c718\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.720817 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/67928300-0460-40b5-bc58-e286187b315e-srv-cert\") pod \"olm-operator-6b444d44fb-z6x8g\" (UID: \"67928300-0460-40b5-bc58-e286187b315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.721221 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f52441fb-f37f-4482-8564-7e6527e2b16a-metrics-tls\") pod \"dns-default-8rhk6\" (UID: \"f52441fb-f37f-4482-8564-7e6527e2b16a\") " pod="openshift-dns/dns-default-8rhk6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.721551 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/075a793c-9e04-43eb-80ee-b0c10d727b22-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-d6vc6\" (UID: \"075a793c-9e04-43eb-80ee-b0c10d727b22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.721609 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/95a8b9d6-c7e2-4654-b113-fa4db3836199-certs\") pod \"machine-config-server-pg4xw\" (UID: \"95a8b9d6-c7e2-4654-b113-fa4db3836199\") " pod="openshift-machine-config-operator/machine-config-server-pg4xw" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.722057 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7-webhook-cert\") pod \"packageserver-d55dfcdfc-pgvrb\" (UID: \"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.723338 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx94p\" 
(UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-kube-api-access-qx94p\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.724042 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e7e8415f-6c51-452d-b0dc-13edd5475f6d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tfxfg\" (UID: \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.724548 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/67928300-0460-40b5-bc58-e286187b315e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z6x8g\" (UID: \"67928300-0460-40b5-bc58-e286187b315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.724675 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29-signing-key\") pod \"service-ca-9c57cc56f-82445\" (UID: \"2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29\") " pod="openshift-service-ca/service-ca-9c57cc56f-82445" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.724730 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kfmgt\" (UID: \"5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kfmgt" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.725191 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/95a8b9d6-c7e2-4654-b113-fa4db3836199-node-bootstrap-token\") pod \"machine-config-server-pg4xw\" (UID: \"95a8b9d6-c7e2-4654-b113-fa4db3836199\") " pod="openshift-machine-config-operator/machine-config-server-pg4xw" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.761039 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g88h\" (UniqueName: \"kubernetes.io/projected/2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29-kube-api-access-9g88h\") pod \"service-ca-9c57cc56f-82445\" (UID: \"2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29\") " pod="openshift-service-ca/service-ca-9c57cc56f-82445" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.787105 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qm6b\" (UniqueName: \"kubernetes.io/projected/f52441fb-f37f-4482-8564-7e6527e2b16a-kube-api-access-9qm6b\") pod \"dns-default-8rhk6\" (UID: \"f52441fb-f37f-4482-8564-7e6527e2b16a\") " pod="openshift-dns/dns-default-8rhk6" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.790626 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: E0218 19:40:57.791084 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:40:58.291068903 +0000 UTC m=+143.073188427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.796366 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgjv8\" (UniqueName: \"kubernetes.io/projected/67928300-0460-40b5-bc58-e286187b315e-kube-api-access-lgjv8\") pod \"olm-operator-6b444d44fb-z6x8g\" (UID: \"67928300-0460-40b5-bc58-e286187b315e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.822360 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.824005 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr76g\" (UniqueName: \"kubernetes.io/projected/a0457430-d92a-464b-8cc7-e36883e9c718-kube-api-access-vr76g\") pod \"catalog-operator-68c6474976-sz9kn\" (UID: \"a0457430-d92a-464b-8cc7-e36883e9c718\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.840902 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.891803 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:57 crc kubenswrapper[5007]: E0218 19:40:57.892272 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:40:58.392252401 +0000 UTC m=+143.174371925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.897667 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-msmb4"] Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.934278 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srrm6\" (UniqueName: \"kubernetes.io/projected/c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b-kube-api-access-srrm6\") pod \"ingress-canary-z6vfc\" (UID: \"c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b\") " pod="openshift-ingress-canary/ingress-canary-z6vfc" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.934289 5007 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sgk57\" (UniqueName: \"kubernetes.io/projected/c17be73d-c271-43eb-b4c6-425ead314225-kube-api-access-sgk57\") pod \"csi-hostpathplugin-clz74\" (UID: \"c17be73d-c271-43eb-b4c6-425ead314225\") " pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.936422 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2z52\" (UniqueName: \"kubernetes.io/projected/95a8b9d6-c7e2-4654-b113-fa4db3836199-kube-api-access-b2z52\") pod \"machine-config-server-pg4xw\" (UID: \"95a8b9d6-c7e2-4654-b113-fa4db3836199\") " pod="openshift-machine-config-operator/machine-config-server-pg4xw" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.939734 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndxsf\" (UniqueName: \"kubernetes.io/projected/370cd8fe-7bde-4e36-b8bc-dca582137d44-kube-api-access-ndxsf\") pod \"collect-profiles-29524050-b7rpb\" (UID: \"370cd8fe-7bde-4e36-b8bc-dca582137d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.950088 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.957536 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7p96\" (UniqueName: \"kubernetes.io/projected/e7e8415f-6c51-452d-b0dc-13edd5475f6d-kube-api-access-l7p96\") pod \"marketplace-operator-79b997595-tfxfg\" (UID: \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\") " pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.964690 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.985168 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.993783 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:57 crc kubenswrapper[5007]: E0218 19:40:57.994092 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:40:58.494079559 +0000 UTC m=+143.276199083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:57 crc kubenswrapper[5007]: I0218 19:40:57.994758 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvqdd\" (UniqueName: \"kubernetes.io/projected/5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1-kube-api-access-vvqdd\") pod \"multus-admission-controller-857f4d67dd-kfmgt\" (UID: \"5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kfmgt" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.025640 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sv8k\" (UniqueName: \"kubernetes.io/projected/eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7-kube-api-access-4sv8k\") pod \"packageserver-d55dfcdfc-pgvrb\" (UID: \"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.026476 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7b57\" (UniqueName: \"kubernetes.io/projected/e485d24d-a88f-49c4-83e3-1f3b9b03663a-kube-api-access-x7b57\") pod \"service-ca-operator-777779d784-xms9g\" (UID: \"e485d24d-a88f-49c4-83e3-1f3b9b03663a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.032377 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-82445" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.033084 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.034092 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.034118 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.034529 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.041137 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z6vfc" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.046525 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vws6t\" (UniqueName: \"kubernetes.io/projected/075a793c-9e04-43eb-80ee-b0c10d727b22-kube-api-access-vws6t\") pod \"openshift-controller-manager-operator-756b6f6bc6-d6vc6\" (UID: \"075a793c-9e04-43eb-80ee-b0c10d727b22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.048096 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8rhk6" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.086836 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-clz74" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.087467 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pg4xw" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.114457 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:58 crc kubenswrapper[5007]: E0218 19:40:58.114805 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:40:58.614778482 +0000 UTC m=+143.396898006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.215638 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:58 crc kubenswrapper[5007]: E0218 19:40:58.216270 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:40:58.716258739 +0000 UTC m=+143.498378263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.278782 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kfmgt" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.290463 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.298248 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.316631 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:58 crc kubenswrapper[5007]: E0218 19:40:58.317142 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:40:58.817128018 +0000 UTC m=+143.599247542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.420180 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:58 crc kubenswrapper[5007]: E0218 19:40:58.420993 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:40:58.920975744 +0000 UTC m=+143.703095268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.523369 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:58 crc kubenswrapper[5007]: E0218 19:40:58.524781 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:40:59.024752829 +0000 UTC m=+143.806872353 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.526990 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp"] Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.559964 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" event={"ID":"02fc3cf7-03bd-4940-956b-51c16a41c310","Type":"ContainerStarted","Data":"1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.560000 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" event={"ID":"02fc3cf7-03bd-4940-956b-51c16a41c310","Type":"ContainerStarted","Data":"3c167e66a7e7499c18a767d416bdac838133a14f1c303d21646d6f6a2d5770b7"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.560861 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.565789 5007 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-gdbm9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.565837 5007 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" podUID="02fc3cf7-03bd-4940-956b-51c16a41c310" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.572872 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" event={"ID":"394f7cfe-7b2d-40d0-a780-fbbe5720101b","Type":"ContainerStarted","Data":"48b13e53cdff771d9c1b002a9a2444bd8571378139ced273f1da6f9a9114fb3a"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.572946 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" event={"ID":"394f7cfe-7b2d-40d0-a780-fbbe5720101b","Type":"ContainerStarted","Data":"6214412860ab6da8294ba5f08ed158d4d0ab2114bbe39acf683b9cf73140d64a"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.585483 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb"] Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.616880 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cjq4g" event={"ID":"3da7b766-72c7-4f00-9787-4fc5d86ef9c0","Type":"ContainerStarted","Data":"d5aee573afc0f80e121754fa1a797cbd6005b7be81daba670a18619e61ca1991"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.616922 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cjq4g" event={"ID":"3da7b766-72c7-4f00-9787-4fc5d86ef9c0","Type":"ContainerStarted","Data":"a7910d5c5bafcdbc1ab383cbedbd41f489d01eb778662ff6f12e24839ab6501c"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.617460 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/downloads-7954f5f757-cjq4g" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.625920 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.625970 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.626823 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:58 crc kubenswrapper[5007]: E0218 19:40:58.628369 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:40:59.128358678 +0000 UTC m=+143.910478202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.630777 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pg4xw" event={"ID":"95a8b9d6-c7e2-4654-b113-fa4db3836199","Type":"ContainerStarted","Data":"66bad040cfcae631b0429e0aadbec8655e876ffc9c75d0d29fac8f4dba8c4b98"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.635419 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g" event={"ID":"2745246a-5a9d-4f6f-95b1-d4d473571db8","Type":"ContainerStarted","Data":"d02e906679686ddc2d7593e3d0d9a82e901927798f07752775354ab27d67c0a9"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.635517 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g" event={"ID":"2745246a-5a9d-4f6f-95b1-d4d473571db8","Type":"ContainerStarted","Data":"5a183d4dba5636fdfadad636ca9a78eb670ba316054067fda321f15eed9d30fe"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.654484 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" event={"ID":"41d2c625-82ed-4b63-bd0b-9acf5199e4f6","Type":"ContainerStarted","Data":"4d763ccb2bdf90ced8a32d2ae1ae5c8e3090d26b92f0dce9d2269ea3f69bceee"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.654545 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" event={"ID":"41d2c625-82ed-4b63-bd0b-9acf5199e4f6","Type":"ContainerStarted","Data":"d2ec36fe0c06efa238816b19da993d22cd2f92bc953b3882cef1394d5c734ab0"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.656006 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.657580 5007 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-jlmb7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.657627 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" podUID="41d2c625-82ed-4b63-bd0b-9acf5199e4f6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.667079 5007 generic.go:334] "Generic (PLEG): container finished" podID="cee7777d-8cb6-4c90-a650-4da91faaf021" containerID="d06e120abfdc00721dfe23d1b314fed4607bb3ae271ec3ab65c7f203995e992a" exitCode=0 Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.667157 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" event={"ID":"cee7777d-8cb6-4c90-a650-4da91faaf021","Type":"ContainerDied","Data":"d06e120abfdc00721dfe23d1b314fed4607bb3ae271ec3ab65c7f203995e992a"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.667194 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" 
event={"ID":"cee7777d-8cb6-4c90-a650-4da91faaf021","Type":"ContainerStarted","Data":"896b3c823d96f2c7b7ab2da7758ad80e5fb994dd1355af5421fa0fe09071eef1"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.669394 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-764l7" event={"ID":"78ae9445-87b0-4e87-a46c-d2a0146d2051","Type":"ContainerStarted","Data":"90a44ffb98e82a451857e27e563b3d226d25390d2c0099af2e51aa28825b6e73"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.669472 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-764l7" event={"ID":"78ae9445-87b0-4e87-a46c-d2a0146d2051","Type":"ContainerStarted","Data":"48a272222770c220b34e85126d813258fcb74fd420d3510bdc1759583c145ad0"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.674279 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-msmb4" event={"ID":"b6d0e889-a325-491c-bfda-b5d2090f4f97","Type":"ContainerStarted","Data":"0c8233e584088be9052892114f07bd86b86fe3d960980a065f8ed033f5168954"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.678869 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf"] Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.694356 5007 generic.go:334] "Generic (PLEG): container finished" podID="3b8e79b9-abbe-4c80-9bce-5c046a75e2d9" containerID="efdc84fcc65b37a736fe817bd8d342816ac5cc057ad0922cd6b65fb38c02c364" exitCode=0 Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.694419 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" event={"ID":"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9","Type":"ContainerDied","Data":"efdc84fcc65b37a736fe817bd8d342816ac5cc057ad0922cd6b65fb38c02c364"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.694470 5007 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" event={"ID":"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9","Type":"ContainerStarted","Data":"ae1c82cfec2e6faa0a1fe0392cd2770aad572b4923e4a7f37fff85633c789b44"} Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.697543 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj"] Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.699403 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2"] Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.728953 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.731317 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m"] Feb 18 19:40:58 crc kubenswrapper[5007]: E0218 19:40:58.752189 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:40:59.252148981 +0000 UTC m=+144.034268505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.790559 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-j5hn8"] Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.792006 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8r6kd"] Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.832274 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7f9v5"] Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.844802 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:58 crc kubenswrapper[5007]: E0218 19:40:58.846853 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:40:59.346839339 +0000 UTC m=+144.128958863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.857688 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj"] Feb 18 19:40:58 crc kubenswrapper[5007]: W0218 19:40:58.875640 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda35b162c_6740_4f1d_bb18_3b8abbc84877.slice/crio-9e3b95074fa189450bf2b54acca45997ebe13d56ceb2a9afd6a706cc5ce77509 WatchSource:0}: Error finding container 9e3b95074fa189450bf2b54acca45997ebe13d56ceb2a9afd6a706cc5ce77509: Status 404 returned error can't find the container with id 9e3b95074fa189450bf2b54acca45997ebe13d56ceb2a9afd6a706cc5ce77509 Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.878967 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m68bj"] Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.916662 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s792g" podStartSLOduration=121.916649003 podStartE2EDuration="2m1.916649003s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:58.914449979 +0000 UTC m=+143.696569513" watchObservedRunningTime="2026-02-18 19:40:58.916649003 +0000 UTC m=+143.698768527" 
Feb 18 19:40:58 crc kubenswrapper[5007]: I0218 19:40:58.946109 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:58 crc kubenswrapper[5007]: E0218 19:40:58.946425 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:40:59.446411729 +0000 UTC m=+144.228531253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.047546 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:59 crc kubenswrapper[5007]: E0218 19:40:59.047954 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:40:59.547937668 +0000 UTC m=+144.330057192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.151118 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:59 crc kubenswrapper[5007]: E0218 19:40:59.151376 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:40:59.651317121 +0000 UTC m=+144.433436645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.151524 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:59 crc kubenswrapper[5007]: E0218 19:40:59.154619 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:40:59.654051632 +0000 UTC m=+144.436171156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.253443 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:59 crc kubenswrapper[5007]: E0218 19:40:59.253756 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:40:59.753728656 +0000 UTC m=+144.535848200 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.332350 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b2xj7"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.339030 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lb5x9"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.351259 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.368286 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:59 crc kubenswrapper[5007]: E0218 19:40:59.368722 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:40:59.8687087 +0000 UTC m=+144.650828224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.445718 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w"] Feb 18 19:40:59 crc kubenswrapper[5007]: W0218 19:40:59.446604 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8937e868_388c_45cd_9260_972f5fd599e7.slice/crio-25da53a1e76f479d3a64cbc6a59a699352748faad930f24f20c67d1ea0a705d2 WatchSource:0}: Error finding container 25da53a1e76f479d3a64cbc6a59a699352748faad930f24f20c67d1ea0a705d2: Status 404 returned error can't find the container with id 25da53a1e76f479d3a64cbc6a59a699352748faad930f24f20c67d1ea0a705d2 Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.449368 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kfmgt"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.451149 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8rhk6"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.454181 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-cjq4g" podStartSLOduration=122.454159955 podStartE2EDuration="2m2.454159955s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:40:59.441113141 +0000 UTC m=+144.223232675" watchObservedRunningTime="2026-02-18 19:40:59.454159955 +0000 UTC m=+144.236279469" Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.454479 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.470999 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:59 crc kubenswrapper[5007]: E0218 19:40:59.471577 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:40:59.971556927 +0000 UTC m=+144.753676451 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:59 crc kubenswrapper[5007]: W0218 19:40:59.509057 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4904a98_07ab_4aab_b7e5_fbcab02bf70d.slice/crio-1277e25472801d6b68aa961f2a7e3475b005d6a76ca748ce992e074f77440841 WatchSource:0}: Error finding container 1277e25472801d6b68aa961f2a7e3475b005d6a76ca748ce992e074f77440841: Status 404 returned error can't find the container with id 1277e25472801d6b68aa961f2a7e3475b005d6a76ca748ce992e074f77440841 Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.525985 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-764l7" podStartSLOduration=122.525967749 podStartE2EDuration="2m2.525967749s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:59.518327634 +0000 UTC m=+144.300447168" watchObservedRunningTime="2026-02-18 19:40:59.525967749 +0000 UTC m=+144.308087263" Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.528604 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5kxmr"] Feb 18 19:40:59 crc kubenswrapper[5007]: W0218 19:40:59.564240 5007 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf52441fb_f37f_4482_8564_7e6527e2b16a.slice/crio-bf42ea60038eecc55de2d8ad1f0fee4e081d9a9bd4548ba8702e61976074c153 WatchSource:0}: Error finding container bf42ea60038eecc55de2d8ad1f0fee4e081d9a9bd4548ba8702e61976074c153: Status 404 returned error can't find the container with id bf42ea60038eecc55de2d8ad1f0fee4e081d9a9bd4548ba8702e61976074c153 Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.567032 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.567077 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" podStartSLOduration=122.567052628 podStartE2EDuration="2m2.567052628s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:59.553115658 +0000 UTC m=+144.335235182" watchObservedRunningTime="2026-02-18 19:40:59.567052628 +0000 UTC m=+144.349172152" Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.573018 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:59 crc kubenswrapper[5007]: E0218 19:40:59.574455 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:41:00.074440146 +0000 UTC m=+144.856559670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.625027 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.644821 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:40:59 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:40:59 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:40:59 crc kubenswrapper[5007]: healthz check failed Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.645059 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.657893 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.671491 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.671590 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-clz74"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.674929 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:59 crc kubenswrapper[5007]: E0218 19:40:59.675375 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:00.175357006 +0000 UTC m=+144.957476530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.700913 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.709842 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-82445"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.731605 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.735634 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tw86n"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.736175 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" podStartSLOduration=121.736150866 podStartE2EDuration="2m1.736150866s" podCreationTimestamp="2026-02-18 19:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:59.729595783 +0000 UTC m=+144.511715307" watchObservedRunningTime="2026-02-18 19:40:59.736150866 +0000 UTC m=+144.518270380" Feb 18 19:40:59 crc kubenswrapper[5007]: W0218 19:40:59.743573 5007 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod075a793c_9e04_43eb_80ee_b0c10d727b22.slice/crio-1b7a8c028ce7e4740a0137b68752048f487ad181fe61307562d4d0004983edad WatchSource:0}: Error finding container 1b7a8c028ce7e4740a0137b68752048f487ad181fe61307562d4d0004983edad: Status 404 returned error can't find the container with id 1b7a8c028ce7e4740a0137b68752048f487ad181fe61307562d4d0004983edad Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.770461 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z6vfc"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.787556 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xms9g"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.788046 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.788233 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-msmb4" event={"ID":"b6d0e889-a325-491c-bfda-b5d2090f4f97","Type":"ContainerStarted","Data":"8b554622cef4bb852e2e077540279dcb93c970cbc32a713dabe0887f6e3aaaf4"} Feb 18 19:40:59 crc kubenswrapper[5007]: E0218 19:40:59.788487 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:00.288468796 +0000 UTC m=+145.070588320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.790971 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" event={"ID":"5accf517-2cda-44c4-b3ce-f57e02ec9fc5","Type":"ContainerStarted","Data":"a92357c9bd39bb9195bcac9138dada1dcdd0ade23c598823bea10a56325c5a94"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.801359 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj" event={"ID":"0c0f693c-c46e-4c30-af8b-f962c3362e2a","Type":"ContainerStarted","Data":"7f1ac85e432da7abbc640b94f0c92b81fd5ce4c9346936717b3ec5701047bd67"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.805887 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp" event={"ID":"d2ded945-4867-4a9b-bc0d-0643f04decbe","Type":"ContainerStarted","Data":"ae3af0f13d25c7a8ddfc08a95bde5f15424d9e75cd7a8a18603123ff58dc86da"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.806277 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp" event={"ID":"d2ded945-4867-4a9b-bc0d-0643f04decbe","Type":"ContainerStarted","Data":"74a468f6f642805c3db32d3a0bfeb7fe3ec27cbd21f98b308296d19c84ccfb18"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.834912 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" event={"ID":"3b8e79b9-abbe-4c80-9bce-5c046a75e2d9","Type":"ContainerStarted","Data":"76c230614fde1e34c28d3b3d65570b75e89c514897ba50993f001de8d524b06e"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.848139 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" event={"ID":"9d1c820f-f578-4833-a1b4-62e9cb304a2b","Type":"ContainerStarted","Data":"8bbb9f030a22dcee711b4ea4575a3871cded8a4b129f13c6db052218ae5fa9c2"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.848183 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" event={"ID":"9d1c820f-f578-4833-a1b4-62e9cb304a2b","Type":"ContainerStarted","Data":"69f57ab55384dc59c6006f6e7fd835c330270f2652c209064dff52afd1c0000b"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.863186 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd" event={"ID":"3228209f-261f-431c-8552-a83291d93d52","Type":"ContainerStarted","Data":"dd12195b8fceb1b9aad86a473d7b62dd5fc155a35882dea0878254ea4c2882fa"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.868618 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.872081 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" podStartSLOduration=122.872059686 podStartE2EDuration="2m2.872059686s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:59.862538156 +0000 UTC m=+144.644657690" watchObservedRunningTime="2026-02-18 19:40:59.872059686 +0000 UTC 
m=+144.654179210" Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.873166 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7f9v5" event={"ID":"a35b162c-6740-4f1d-bb18-3b8abbc84877","Type":"ContainerStarted","Data":"91df6b2b792cf35bc80e903fef5784f297c96b41b602a9d430e9d42bff3587cb"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.873196 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7f9v5" event={"ID":"a35b162c-6740-4f1d-bb18-3b8abbc84877","Type":"ContainerStarted","Data":"9e3b95074fa189450bf2b54acca45997ebe13d56ceb2a9afd6a706cc5ce77509"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.877488 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" event={"ID":"394f7cfe-7b2d-40d0-a780-fbbe5720101b","Type":"ContainerStarted","Data":"dbca5d76cb14a97c61ae32c3a6ddd846e38a3eee621ac90d95b3400273d2a030"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.878930 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5kxmr" event={"ID":"7afb27dc-b979-45ec-ae79-5595d0dedb6e","Type":"ContainerStarted","Data":"8a49c4e16cbc996c5e75e7361ab054ea4ed928c107a4f7004abac8d7ee8c42d2"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.881059 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m" event={"ID":"5d7f3068-4f27-4e7c-b458-0fa999acbe60","Type":"ContainerStarted","Data":"4bf8fcaa1143f24465f2c113438a46fbc485b2143093678295f57172bd17cf2f"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.881488 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tfxfg"] Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.882294 5007 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" event={"ID":"9105100f-dce8-42a8-b182-945de577952d","Type":"ContainerStarted","Data":"e56a44288d9379654467a19e095e8bb4fc0a17e7a3e0a2f9f518dab17e267882"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.883494 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb" event={"ID":"f99085de-37df-4176-94af-d62782e53e73","Type":"ContainerStarted","Data":"6baa349b791cd04c5f86f38434d17aa7816f5c363805848f17ee62c8e91c595f"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.887032 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" event={"ID":"d9e559f8-c226-49c4-9354-68241ac7fa62","Type":"ContainerStarted","Data":"49e29118de559ceca0b0aa6354842342dba3a9cc02901fa7205cf3d15a3f0d59"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.889581 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:40:59 crc kubenswrapper[5007]: E0218 19:40:59.890381 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:00.390367325 +0000 UTC m=+145.172486839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.892238 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd" podStartSLOduration=122.89222503 podStartE2EDuration="2m2.89222503s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:59.889162049 +0000 UTC m=+144.671281573" watchObservedRunningTime="2026-02-18 19:40:59.89222503 +0000 UTC m=+144.674344554" Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.900423 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8rhk6" event={"ID":"f52441fb-f37f-4482-8564-7e6527e2b16a","Type":"ContainerStarted","Data":"bf42ea60038eecc55de2d8ad1f0fee4e081d9a9bd4548ba8702e61976074c153"} Feb 18 19:40:59 crc kubenswrapper[5007]: W0218 19:40:59.911613 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode485d24d_a88f_49c4_83e3_1f3b9b03663a.slice/crio-eef8e71d635faa2238d9d876824421caf3b15a395e0d7f38b5be652cf2847688 WatchSource:0}: Error finding container eef8e71d635faa2238d9d876824421caf3b15a395e0d7f38b5be652cf2847688: Status 404 returned error can't find the container with id eef8e71d635faa2238d9d876824421caf3b15a395e0d7f38b5be652cf2847688 Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.919176 5007 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m" podStartSLOduration=122.919157772 podStartE2EDuration="2m2.919157772s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:59.914723862 +0000 UTC m=+144.696843406" watchObservedRunningTime="2026-02-18 19:40:59.919157772 +0000 UTC m=+144.701277296" Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.964388 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf" event={"ID":"d23074ca-8045-4137-aa07-78b59df3dbc9","Type":"ContainerStarted","Data":"3aa6a57eeb30ec63666cd4f9f755141e9ca49a2581cd25c957a72bafc8656923"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.964882 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf" event={"ID":"d23074ca-8045-4137-aa07-78b59df3dbc9","Type":"ContainerStarted","Data":"c164ea74438257e53c4822db299791ff1e044bd7314dcb701245bbfb2e8e5ac0"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.964965 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5" event={"ID":"ba240294-0bf3-46a0-8fa5-7d2029e3fd56","Type":"ContainerStarted","Data":"7d8ad2a1093b4948a08ef3f01af9af4b1f0e1e9a7641a35dcf41c1db74954401"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.965035 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" event={"ID":"251e8541-289e-4085-9d90-5621ea85ca04","Type":"ContainerStarted","Data":"24d1798e686afe7b8db5f45ab9910115f004cc1c0786831afe42f6c0ff60a20a"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.970004 5007 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" event={"ID":"0b8a38a2-828b-4d45-989f-f3b42f5c7ef5","Type":"ContainerStarted","Data":"33d61616b4dfb01a069e5788d817c40604e756a9d48e589c4790f51a9634342f"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.974824 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kfmgt" event={"ID":"5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1","Type":"ContainerStarted","Data":"8b351316cb5b8cf0fccf709f4263e5c92f1e1cbf1799b84edd164a9523dc18da"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.978411 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" event={"ID":"8937e868-388c-45cd-9260-972f5fd599e7","Type":"ContainerStarted","Data":"25da53a1e76f479d3a64cbc6a59a699352748faad930f24f20c67d1ea0a705d2"} Feb 18 19:40:59 crc kubenswrapper[5007]: W0218 19:40:59.979083 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7e8415f_6c51_452d_b0dc_13edd5475f6d.slice/crio-fe0e64b078f547a81a29c455217902be41377877fe7ad93218ed2db497e6c2e3 WatchSource:0}: Error finding container fe0e64b078f547a81a29c455217902be41377877fe7ad93218ed2db497e6c2e3: Status 404 returned error can't find the container with id fe0e64b078f547a81a29c455217902be41377877fe7ad93218ed2db497e6c2e3 Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.982491 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" event={"ID":"5dbbd3e2-4581-4ca2-9180-3dd62529478e","Type":"ContainerStarted","Data":"57cb6ee89a36d5635317cc9e6ae79a5f6c833a991aaa59ea41cefa2f30f2d434"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.982533 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" event={"ID":"5dbbd3e2-4581-4ca2-9180-3dd62529478e","Type":"ContainerStarted","Data":"d2de6bb308f7d75ade9a5bc8fa500b3f760d6c95a776979a50ee9eafdb4a6713"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.988066 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pg4xw" event={"ID":"95a8b9d6-c7e2-4654-b113-fa4db3836199","Type":"ContainerStarted","Data":"120944b8b9aec1e8d646e438133a935ac5312a32c0e31b0fabd71e31ab0ce7b1"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.991896 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:40:59 crc kubenswrapper[5007]: E0218 19:40:59.992771 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:00.492731548 +0000 UTC m=+145.274851072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.997301 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb" podStartSLOduration=122.997289342 podStartE2EDuration="2m2.997289342s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:59.996070676 +0000 UTC m=+144.778190210" watchObservedRunningTime="2026-02-18 19:40:59.997289342 +0000 UTC m=+144.779408866" Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.997460 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w" event={"ID":"b4904a98-07ab-4aab-b7e5-fbcab02bf70d","Type":"ContainerStarted","Data":"1277e25472801d6b68aa961f2a7e3475b005d6a76ca748ce992e074f77440841"} Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.998007 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 18 19:40:59 crc kubenswrapper[5007]: I0218 19:40:59.998045 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cjq4g" 
podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.001225 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tfx5l" podStartSLOduration=123.001217018 podStartE2EDuration="2m3.001217018s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:59.965259229 +0000 UTC m=+144.747378763" watchObservedRunningTime="2026-02-18 19:41:00.001217018 +0000 UTC m=+144.783336542" Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.006226 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.011761 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.084295 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-pg4xw" podStartSLOduration=5.084278593 podStartE2EDuration="5.084278593s" podCreationTimestamp="2026-02-18 19:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:00.083599663 +0000 UTC m=+144.865719207" watchObservedRunningTime="2026-02-18 19:41:00.084278593 +0000 UTC m=+144.866398117" Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.085004 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mwkdf" podStartSLOduration=123.084994344 podStartE2EDuration="2m3.084994344s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:00.055168976 +0000 UTC m=+144.837288500" watchObservedRunningTime="2026-02-18 19:41:00.084994344 +0000 UTC m=+144.867113868" Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.094009 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:00 crc kubenswrapper[5007]: E0218 19:41:00.098013 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:00.597994436 +0000 UTC m=+145.380113990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.210533 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:00 crc kubenswrapper[5007]: E0218 19:41:00.210938 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:00.71091881 +0000 UTC m=+145.493038374 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.311857 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:00 crc kubenswrapper[5007]: E0218 19:41:00.312015 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:00.811988425 +0000 UTC m=+145.594107949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.312374 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:00 crc kubenswrapper[5007]: E0218 19:41:00.318168 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:00.818155997 +0000 UTC m=+145.600275531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.421199 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:00 crc kubenswrapper[5007]: E0218 19:41:00.421750 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:00.921724435 +0000 UTC m=+145.703843959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.524798 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:00 crc kubenswrapper[5007]: E0218 19:41:00.527662 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:01.027629823 +0000 UTC m=+145.809749347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.616700 5007 csr.go:261] certificate signing request csr-d4c84 is approved, waiting to be issued Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.624935 5007 csr.go:257] certificate signing request csr-d4c84 is issued Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.625995 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:00 crc kubenswrapper[5007]: E0218 19:41:00.626082 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:01.12606509 +0000 UTC m=+145.908184614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.626337 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:00 crc kubenswrapper[5007]: E0218 19:41:00.626814 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:01.126798402 +0000 UTC m=+145.908917926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.631027 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:41:00 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:00 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:00 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.631096 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.727642 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:00 crc kubenswrapper[5007]: E0218 19:41:00.727930 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:41:01.227909928 +0000 UTC m=+146.010029452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.728144 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:00 crc kubenswrapper[5007]: E0218 19:41:00.728484 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:01.228472435 +0000 UTC m=+146.010591959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.836542 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:00 crc kubenswrapper[5007]: E0218 19:41:00.837338 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:01.337322908 +0000 UTC m=+146.119442422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:00 crc kubenswrapper[5007]: I0218 19:41:00.939587 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:00 crc kubenswrapper[5007]: E0218 19:41:00.939922 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:01.439910578 +0000 UTC m=+146.222030102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.043657 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:01 crc kubenswrapper[5007]: E0218 19:41:01.043867 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:01.543844617 +0000 UTC m=+146.325964141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.044659 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:01 crc kubenswrapper[5007]: E0218 19:41:01.044987 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:01.54497489 +0000 UTC m=+146.327094414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.146004 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:01 crc kubenswrapper[5007]: E0218 19:41:01.146533 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:01.646514969 +0000 UTC m=+146.428634493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.148122 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp" event={"ID":"d2ded945-4867-4a9b-bc0d-0643f04decbe","Type":"ContainerStarted","Data":"5fef522983765603da38542d4a776715d650da1777076d5916f95233ed714c7c"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.158784 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8r6kd" event={"ID":"3228209f-261f-431c-8552-a83291d93d52","Type":"ContainerStarted","Data":"aa6d09959a98faa418b2a5fa353f8bf3f1dde912ffe827fffba361d0ae76b9a0"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.165154 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj" event={"ID":"0c0f693c-c46e-4c30-af8b-f962c3362e2a","Type":"ContainerStarted","Data":"72d03483603a25d087bcc8e333ca73f9a29fabe1e700a39ce462a6463d1383a0"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.166378 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" event={"ID":"a0457430-d92a-464b-8cc7-e36883e9c718","Type":"ContainerStarted","Data":"612909ba4d85f5ea7eb5188d13024db63fb3f616eea6b161e066ae504db190c4"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.167967 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-5kxmr" event={"ID":"7afb27dc-b979-45ec-ae79-5595d0dedb6e","Type":"ContainerStarted","Data":"55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.175085 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5hb2m" event={"ID":"5d7f3068-4f27-4e7c-b458-0fa999acbe60","Type":"ContainerStarted","Data":"e66f83dee817392fbde2b6ec852a18c9ae0d1df8e4518a0cf5a0d78bb46300f7"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.187544 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9x6zp" podStartSLOduration=124.187531067 podStartE2EDuration="2m4.187531067s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.186147016 +0000 UTC m=+145.968266540" watchObservedRunningTime="2026-02-18 19:41:01.187531067 +0000 UTC m=+145.969650591" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.196158 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" event={"ID":"0b8a38a2-828b-4d45-989f-f3b42f5c7ef5","Type":"ContainerStarted","Data":"dcfa84d1def5273b1b1b548f5d9430ee518b7783ac14454c02920900fcb40fbc"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.201302 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" event={"ID":"cee7777d-8cb6-4c90-a650-4da91faaf021","Type":"ContainerStarted","Data":"3beaa05c4fc90c7df5c743baca7101ba63cccef069a8080ea0cbaffc4246c1f2"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.202253 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" event={"ID":"075a793c-9e04-43eb-80ee-b0c10d727b22","Type":"ContainerStarted","Data":"d44e68b97bf0f066a490160de7b74b286dcfe7b1fb26ffa19388248666337393"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.202277 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" event={"ID":"075a793c-9e04-43eb-80ee-b0c10d727b22","Type":"ContainerStarted","Data":"1b7a8c028ce7e4740a0137b68752048f487ad181fe61307562d4d0004983edad"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.204013 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x7tb" event={"ID":"f99085de-37df-4176-94af-d62782e53e73","Type":"ContainerStarted","Data":"1905a40c87f3afc2fc1f6865bb2a203cec72c3cc1d8cc9417e4eb55dbb95708d"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.214886 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" event={"ID":"9105100f-dce8-42a8-b182-945de577952d","Type":"ContainerStarted","Data":"2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.217306 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.217617 5007 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-b2xj7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.217674 5007 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" podUID="9105100f-dce8-42a8-b182-945de577952d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.247339 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-5kxmr" podStartSLOduration=124.247315286 podStartE2EDuration="2m4.247315286s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.246611365 +0000 UTC m=+146.028730899" watchObservedRunningTime="2026-02-18 19:41:01.247315286 +0000 UTC m=+146.029434810" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.247770 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.250193 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" event={"ID":"d9e559f8-c226-49c4-9354-68241ac7fa62","Type":"ContainerStarted","Data":"6c0141a0d53c780ac27e33eaf676a1bdf7aa83ecdf307c40af7bee07427f0457"} Feb 18 19:41:01 crc kubenswrapper[5007]: E0218 19:41:01.250253 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:41:01.750233642 +0000 UTC m=+146.532353156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.250283 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" event={"ID":"d9e559f8-c226-49c4-9354-68241ac7fa62","Type":"ContainerStarted","Data":"68e4b29da2cdc921d26e7fa7a8d13bdc199098539ca6ae4061d9893e16bd1a4a"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.268761 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z6vfc" event={"ID":"c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b","Type":"ContainerStarted","Data":"6aa6a1880cdbd498b851a2fc15c9b90ab882a794829ae1c549d5c82806355e38"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.284912 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-d6vc6" podStartSLOduration=124.284890862 podStartE2EDuration="2m4.284890862s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.283206443 +0000 UTC m=+146.065325997" watchObservedRunningTime="2026-02-18 19:41:01.284890862 +0000 UTC m=+146.067010386" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.286400 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" event={"ID":"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7","Type":"ContainerStarted","Data":"79fe1e2e705817fb63059d66acac31571aae9e74eaf8af1c30c06af20b2dc142"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.286496 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" event={"ID":"eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7","Type":"ContainerStarted","Data":"8907167e51f75922992f874ce68efb3b3542fe09aa84059841535b09f87b91fe"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.286902 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.298654 5007 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pgvrb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.298723 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" podUID="eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.321485 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xjcf2" podStartSLOduration=124.321469469 podStartE2EDuration="2m4.321469469s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:41:01.321145779 +0000 UTC m=+146.103265323" watchObservedRunningTime="2026-02-18 19:41:01.321469469 +0000 UTC m=+146.103588993" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.339583 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" event={"ID":"67928300-0460-40b5-bc58-e286187b315e","Type":"ContainerStarted","Data":"5dc2214777ce6751cd5eb0c96988a08b6cfa4fbf87cf633eb6424304ebdfedf8"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.348835 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:01 crc kubenswrapper[5007]: E0218 19:41:01.350423 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:01.85037708 +0000 UTC m=+146.632496604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.358874 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-clz74" event={"ID":"c17be73d-c271-43eb-b4c6-425ead314225","Type":"ContainerStarted","Data":"e4480280d92f08f21d2f8c7408019a489ed6efde1c661e38e9af6e891bba8321"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.399117 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" podStartSLOduration=124.399087934 podStartE2EDuration="2m4.399087934s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.397658431 +0000 UTC m=+146.179777975" watchObservedRunningTime="2026-02-18 19:41:01.399087934 +0000 UTC m=+146.181207458" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.421817 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kfmgt" event={"ID":"5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1","Type":"ContainerStarted","Data":"1cc2a56a655476f07462610b59b42745acbb6e042795be9902a2d4fb10a3f945"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.452879 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:01 crc kubenswrapper[5007]: E0218 19:41:01.453347 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:01.95333244 +0000 UTC m=+146.735451974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.470865 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" podStartSLOduration=124.470843846 podStartE2EDuration="2m4.470843846s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.459946065 +0000 UTC m=+146.242065589" watchObservedRunningTime="2026-02-18 19:41:01.470843846 +0000 UTC m=+146.252963370" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.497105 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" event={"ID":"9d1c820f-f578-4833-a1b4-62e9cb304a2b","Type":"ContainerStarted","Data":"ca124944f45dcb3b82a9cb74a5f0b50dcefedf841d7737a948d832506712c380"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.554658 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.556165 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lc6l2" podStartSLOduration=124.556139016 podStartE2EDuration="2m4.556139016s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.545340388 +0000 UTC m=+146.327459922" watchObservedRunningTime="2026-02-18 19:41:01.556139016 +0000 UTC m=+146.338258540" Feb 18 19:41:01 crc kubenswrapper[5007]: E0218 19:41:01.556255 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:02.056236509 +0000 UTC m=+146.838356033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.579604 5007 generic.go:334] "Generic (PLEG): container finished" podID="5dbbd3e2-4581-4ca2-9180-3dd62529478e" containerID="57cb6ee89a36d5635317cc9e6ae79a5f6c833a991aaa59ea41cefa2f30f2d434" exitCode=0 Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.579752 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" event={"ID":"5dbbd3e2-4581-4ca2-9180-3dd62529478e","Type":"ContainerDied","Data":"57cb6ee89a36d5635317cc9e6ae79a5f6c833a991aaa59ea41cefa2f30f2d434"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.586419 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pbmcj" podStartSLOduration=124.586382686 podStartE2EDuration="2m4.586382686s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.49442345 +0000 UTC m=+146.276542974" watchObservedRunningTime="2026-02-18 19:41:01.586382686 +0000 UTC m=+146.368502210" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.619714 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-msmb4" event={"ID":"b6d0e889-a325-491c-bfda-b5d2090f4f97","Type":"ContainerStarted","Data":"86d574d27e974b817e404d86bce5782f08856300dafe0901d2f571917cf4660c"} Feb 18 19:41:01 
crc kubenswrapper[5007]: I0218 19:41:01.654626 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 19:36:00 +0000 UTC, rotation deadline is 2027-01-08 21:22:44.710148821 +0000 UTC Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.654672 5007 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7777h41m43.055479535s for next certificate rotation Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.656684 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:01 crc kubenswrapper[5007]: E0218 19:41:01.656904 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:02.156895032 +0000 UTC m=+146.939014556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.663921 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:41:01 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:01 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:01 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.663988 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.683281 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-msmb4" podStartSLOduration=124.683255048 podStartE2EDuration="2m4.683255048s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.683120864 +0000 UTC m=+146.465240398" watchObservedRunningTime="2026-02-18 19:41:01.683255048 +0000 UTC m=+146.465374572" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.683364 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w" event={"ID":"b4904a98-07ab-4aab-b7e5-fbcab02bf70d","Type":"ContainerStarted","Data":"8b9183e5f0cbe25a35ec4500e4008bcf542ea56f379ffd3a5bfccd1745bf8c71"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.724066 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" event={"ID":"251e8541-289e-4085-9d90-5621ea85ca04","Type":"ContainerStarted","Data":"cdcc573dc3c98c9bd7ed648b68e5cbea4c391c58d271aaa0d1aa0d0df5bcd3e9"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.724178 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" event={"ID":"251e8541-289e-4085-9d90-5621ea85ca04","Type":"ContainerStarted","Data":"7e637c9434ac2542a9fc11bf5d4ed42052fffd29f1f0e8fb0267c2859c3aeb89"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.739205 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4pd8w" podStartSLOduration=124.739188044 podStartE2EDuration="2m4.739188044s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.738181395 +0000 UTC m=+146.520300919" watchObservedRunningTime="2026-02-18 19:41:01.739188044 +0000 UTC m=+146.521307568" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.755674 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" event={"ID":"e7e8415f-6c51-452d-b0dc-13edd5475f6d","Type":"ContainerStarted","Data":"fe0e64b078f547a81a29c455217902be41377877fe7ad93218ed2db497e6c2e3"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.758313 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:01 crc kubenswrapper[5007]: E0218 19:41:01.758406 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:02.258389189 +0000 UTC m=+147.040508713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.759742 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:01 crc kubenswrapper[5007]: E0218 19:41:01.762009 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:02.261985385 +0000 UTC m=+147.044105119 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.778837 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" event={"ID":"5accf517-2cda-44c4-b3ce-f57e02ec9fc5","Type":"ContainerStarted","Data":"6ff17a698ff80e669ec049fce58861bfdee90075918e0bc10200c57f90194d48"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.779507 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.781420 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" event={"ID":"e485d24d-a88f-49c4-83e3-1f3b9b03663a","Type":"ContainerStarted","Data":"eef8e71d635faa2238d9d876824421caf3b15a395e0d7f38b5be652cf2847688"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.811310 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7f9v5" event={"ID":"a35b162c-6740-4f1d-bb18-3b8abbc84877","Type":"ContainerStarted","Data":"5898c4bb3ea266f96a84894c82bd69fb12ca485a5f7bd2b341c62c61e11e6200"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.832195 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" 
event={"ID":"370cd8fe-7bde-4e36-b8bc-dca582137d44","Type":"ContainerStarted","Data":"8f925a051489d04e66e2fe4e88c59b74fde118e0beae3a7f0b4fcf4152c116c4"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.832235 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" event={"ID":"370cd8fe-7bde-4e36-b8bc-dca582137d44","Type":"ContainerStarted","Data":"b1a451967e7b7a5b706b502381fda7e54d500b2d6e0e894c02e5f431d3fa1191"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.849519 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-j5hn8" podStartSLOduration=124.849500431 podStartE2EDuration="2m4.849500431s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.778725278 +0000 UTC m=+146.560844822" watchObservedRunningTime="2026-02-18 19:41:01.849500431 +0000 UTC m=+146.631619965" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.855932 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tw86n" event={"ID":"b1fd9d95-9876-43a9-b061-b5ef6bb15741","Type":"ContainerStarted","Data":"303e1469d104bbdd3948a540cb8c4c9627bfbcd71612278dd06ec6906d934dee"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.856005 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tw86n" event={"ID":"b1fd9d95-9876-43a9-b061-b5ef6bb15741","Type":"ContainerStarted","Data":"c469b2c55ed2394aec18e533a04ca68dec709227e893cb3c674e11d26e8f453c"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.857147 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:41:01 crc 
kubenswrapper[5007]: I0218 19:41:01.861804 5007 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw86n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.861847 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tw86n" podUID="b1fd9d95-9876-43a9-b061-b5ef6bb15741" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.862731 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:01 crc kubenswrapper[5007]: E0218 19:41:01.873385 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:02.373343653 +0000 UTC m=+147.155463267 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.873552 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" event={"ID":"8937e868-388c-45cd-9260-972f5fd599e7","Type":"ContainerStarted","Data":"a814430e45d8d6fa11da341127628be1e96bcb01fe488f439e11278dbe3a765a"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.889214 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5" event={"ID":"ba240294-0bf3-46a0-8fa5-7d2029e3fd56","Type":"ContainerStarted","Data":"a4c48da5fb914768b39d676c6adc08194da6665f94f331eab5e039bc99957b6d"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.902945 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" podStartSLOduration=124.902926874 podStartE2EDuration="2m4.902926874s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.850985225 +0000 UTC m=+146.633104749" watchObservedRunningTime="2026-02-18 19:41:01.902926874 +0000 UTC m=+146.685046398" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.917508 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": 
dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.917778 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.952461 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-82445" event={"ID":"2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29","Type":"ContainerStarted","Data":"ccff5db09b036d316ae299f64b5c989401215ccfb4a0ffc969b66d6756d7b922"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.952525 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-82445" event={"ID":"2ad69e00-fce0-4a6f-b74d-2bc9bc9e7b29","Type":"ContainerStarted","Data":"c23cc6927b6f09bc2d5fc191d92ff340285253dc9d39456a7a2d678350b0d0c5"} Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.967736 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" podStartSLOduration=124.967719011 podStartE2EDuration="2m4.967719011s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.966944158 +0000 UTC m=+146.749063702" watchObservedRunningTime="2026-02-18 19:41:01.967719011 +0000 UTC m=+146.749838535" Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.967853 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:01 crc kubenswrapper[5007]: E0218 19:41:01.968281 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:02.468267697 +0000 UTC m=+147.250387221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:01 crc kubenswrapper[5007]: I0218 19:41:01.968513 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" podStartSLOduration=124.968508974 podStartE2EDuration="2m4.968508974s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:01.904989114 +0000 UTC m=+146.687108638" watchObservedRunningTime="2026-02-18 19:41:01.968508974 +0000 UTC m=+146.750628488" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.038064 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7f9v5" podStartSLOduration=125.038046501 podStartE2EDuration="2m5.038046501s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-18 19:41:02.002891416 +0000 UTC m=+146.785010950" watchObservedRunningTime="2026-02-18 19:41:02.038046501 +0000 UTC m=+146.820166025" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.040576 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-tw86n" podStartSLOduration=125.040564885 podStartE2EDuration="2m5.040564885s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:02.028572842 +0000 UTC m=+146.810692376" watchObservedRunningTime="2026-02-18 19:41:02.040564885 +0000 UTC m=+146.822684409" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.063937 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.064515 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.070057 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:02 crc kubenswrapper[5007]: E0218 19:41:02.071394 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:02.571378342 +0000 UTC m=+147.353497856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.116521 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-lb5x9" podStartSLOduration=125.11649559 podStartE2EDuration="2m5.11649559s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:02.115830701 +0000 UTC m=+146.897950225" watchObservedRunningTime="2026-02-18 19:41:02.11649559 +0000 UTC m=+146.898615114" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.117567 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cklm5" podStartSLOduration=125.117560062 podStartE2EDuration="2m5.117560062s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:02.071602969 +0000 UTC m=+146.853722493" watchObservedRunningTime="2026-02-18 19:41:02.117560062 +0000 UTC m=+146.899679586" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.172603 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: 
\"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.172791 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-82445" podStartSLOduration=125.172767347 podStartE2EDuration="2m5.172767347s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:02.171809778 +0000 UTC m=+146.953929302" watchObservedRunningTime="2026-02-18 19:41:02.172767347 +0000 UTC m=+146.954886871" Feb 18 19:41:02 crc kubenswrapper[5007]: E0218 19:41:02.173256 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:02.673240411 +0000 UTC m=+147.455359935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.274026 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:02 crc kubenswrapper[5007]: E0218 19:41:02.274700 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:02.774631024 +0000 UTC m=+147.556750548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.275055 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:02 crc kubenswrapper[5007]: E0218 19:41:02.275613 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:02.775590032 +0000 UTC m=+147.557709556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.376122 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:02 crc kubenswrapper[5007]: E0218 19:41:02.376537 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:02.876500082 +0000 UTC m=+147.658619756 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.477814 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:02 crc kubenswrapper[5007]: E0218 19:41:02.478510 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:02.978409782 +0000 UTC m=+147.760529306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.578796 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:02 crc kubenswrapper[5007]: E0218 19:41:02.579071 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:03.079054084 +0000 UTC m=+147.861173608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.630020 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:41:02 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:02 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:02 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.630141 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.680649 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:02 crc kubenswrapper[5007]: E0218 19:41:02.681175 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:41:03.18114844 +0000 UTC m=+147.963268154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.734398 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.781519 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:02 crc kubenswrapper[5007]: E0218 19:41:02.781906 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:03.281873164 +0000 UTC m=+148.063992688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.883970 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:02 crc kubenswrapper[5007]: E0218 19:41:02.884411 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:03.384395132 +0000 UTC m=+148.166514656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.937625 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" event={"ID":"e7e8415f-6c51-452d-b0dc-13edd5475f6d","Type":"ContainerStarted","Data":"8888ea3b6a01fc2fc18fe155d9a37858fd327ea972185945b11ff020292178a5"} Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.938679 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.945917 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" event={"ID":"5accf517-2cda-44c4-b3ce-f57e02ec9fc5","Type":"ContainerStarted","Data":"26b41ae7005b6b4d54f2fffd2c92aab45f5e88ce3b9c682ffa6675aaa3b3de76"} Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.946080 5007 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tfxfg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.946142 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" podUID="e7e8415f-6c51-452d-b0dc-13edd5475f6d" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.956617 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xms9g" event={"ID":"e485d24d-a88f-49c4-83e3-1f3b9b03663a","Type":"ContainerStarted","Data":"5627e681c45fbd0505114dd3e04bf5031943775bd1ae3637ec8bd5741903c251"} Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.959169 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8rhk6" event={"ID":"f52441fb-f37f-4482-8564-7e6527e2b16a","Type":"ContainerStarted","Data":"36589924eb74aefae4b2f693e75bab27889e782a13609b38a913c718cf396f66"} Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.959199 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8rhk6" event={"ID":"f52441fb-f37f-4482-8564-7e6527e2b16a","Type":"ContainerStarted","Data":"c23568d9d2704f2d67e553e3f8cef8eac30cbcf2bb35ba9ef41137d39f69dc29"} Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.959300 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-8rhk6" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.966387 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" event={"ID":"cee7777d-8cb6-4c90-a650-4da91faaf021","Type":"ContainerStarted","Data":"79f97345ee6be97242fa4ee8a5ca939f6385c0be62bfc43daee370daefbc67dd"} Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.971380 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kfmgt" event={"ID":"5fe6176b-89ff-4479-b3b3-f9a0e4cda0c1","Type":"ContainerStarted","Data":"a04fac95467fc95cda5dc1b7fb28d98d82749492a12573f4f8ffc9fc1ed2185b"} Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.972907 5007 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" event={"ID":"a0457430-d92a-464b-8cc7-e36883e9c718","Type":"ContainerStarted","Data":"18fef63edf229710be9e8540ac14e40ac2f0bbb76a9fab0aba57e2d1a7de9c3c"} Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.973147 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.974813 5007 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-sz9kn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.974858 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" podUID="a0457430-d92a-464b-8cc7-e36883e9c718" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.981746 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z6vfc" event={"ID":"c4cfc724-b4d0-4f70-bbb1-1bf8fec0228b","Type":"ContainerStarted","Data":"dabffd407ed3cb884f2a307908c1727e0e2c1844255194f5c4c09b81ed405851"} Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.983687 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" event={"ID":"67928300-0460-40b5-bc58-e286187b315e","Type":"ContainerStarted","Data":"1eefd967be38ad5b84b83014c3c5f79a8c5a6f2addcee84ed7b6e78e7f5a02dd"} Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.984284 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.984595 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:02 crc kubenswrapper[5007]: E0218 19:41:02.985077 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:03.485062535 +0000 UTC m=+148.267182059 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.989090 5007 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-z6x8g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.989125 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" podUID="67928300-0460-40b5-bc58-e286187b315e" containerName="olm-operator" probeResult="failure" output="Get 
\"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.994400 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" event={"ID":"5dbbd3e2-4581-4ca2-9180-3dd62529478e","Type":"ContainerStarted","Data":"b157e2bb19801f2cd103e3921d3a65ef9daef49d1c57df2d94891e8a31cfe665"} Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.995004 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" Feb 18 19:41:02 crc kubenswrapper[5007]: I0218 19:41:02.997309 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj" event={"ID":"0c0f693c-c46e-4c30-af8b-f962c3362e2a","Type":"ContainerStarted","Data":"b88dbf1d4b8657e7adf53516498658f054fe01911e9ec14f1b3152e244a5034e"} Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.007756 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-clz74" event={"ID":"c17be73d-c271-43eb-b4c6-425ead314225","Type":"ContainerStarted","Data":"94735d6234e9abed3cb471307cebd3e3cd74759ec58c379592918ce7ffd519e8"} Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.008471 5007 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw86n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.008513 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tw86n" podUID="b1fd9d95-9876-43a9-b061-b5ef6bb15741" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.008784 5007 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pgvrb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.008810 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" podUID="eac95d1e-a170-4e42-a1a8-a6fe21d6c5d7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.028818 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mh6nh" Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.085806 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.088491 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:03.588471389 +0000 UTC m=+148.370590993 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.142818 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" podStartSLOduration=126.142803768 podStartE2EDuration="2m6.142803768s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:03.123874751 +0000 UTC m=+147.905994275" watchObservedRunningTime="2026-02-18 19:41:03.142803768 +0000 UTC m=+147.924923282" Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.186794 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.188105 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:03.688085091 +0000 UTC m=+148.470204615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.290029 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt"
Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.290457 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:03.790440334 +0000 UTC m=+148.572559868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.391639 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.391855 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:03.891803448 +0000 UTC m=+148.673922982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.392227 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt"
Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.392583 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:03.89257187 +0000 UTC m=+148.674691394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.435542 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kfmgt" podStartSLOduration=126.435522924 podStartE2EDuration="2m6.435522924s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:03.434791913 +0000 UTC m=+148.216911447" watchObservedRunningTime="2026-02-18 19:41:03.435522924 +0000 UTC m=+148.217642448"
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.436103 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" podStartSLOduration=126.436096271 podStartE2EDuration="2m6.436096271s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:03.374971932 +0000 UTC m=+148.157091456" watchObservedRunningTime="2026-02-18 19:41:03.436096271 +0000 UTC m=+148.218215795"
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.493791 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.494129 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:03.994115309 +0000 UTC m=+148.776234823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.543369 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-z6vfc" podStartSLOduration=9.543354388000001 podStartE2EDuration="9.543354388s" podCreationTimestamp="2026-02-18 19:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:03.54171773 +0000 UTC m=+148.323837274" watchObservedRunningTime="2026-02-18 19:41:03.543354388 +0000 UTC m=+148.325473912"
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.545156 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8rhk6" podStartSLOduration=9.545147791 podStartE2EDuration="9.545147791s" podCreationTimestamp="2026-02-18 19:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:03.466045053 +0000 UTC m=+148.248164577" watchObservedRunningTime="2026-02-18 19:41:03.545147791 +0000 UTC m=+148.327267315"
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.595477 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt"
Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.595776 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.095763621 +0000 UTC m=+148.877883145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.631159 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:41:03 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld
Feb 18 19:41:03 crc kubenswrapper[5007]: [+]process-running ok
Feb 18 19:41:03 crc kubenswrapper[5007]: healthz check failed
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.631234 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.668059 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" podStartSLOduration=126.668040518 podStartE2EDuration="2m6.668040518s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:03.63073584 +0000 UTC m=+148.412855384" watchObservedRunningTime="2026-02-18 19:41:03.668040518 +0000 UTC m=+148.450160042"
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.669174 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" podStartSLOduration=126.669167492 podStartE2EDuration="2m6.669167492s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:03.667352238 +0000 UTC m=+148.449471762" watchObservedRunningTime="2026-02-18 19:41:03.669167492 +0000 UTC m=+148.451287016"
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.696925 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.697051 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.197024641 +0000 UTC m=+148.979144165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.697269 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt"
Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.697604 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.197595668 +0000 UTC m=+148.979715192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.792777 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r28hj" podStartSLOduration=126.792749639 podStartE2EDuration="2m6.792749639s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:03.721717408 +0000 UTC m=+148.503836922" watchObservedRunningTime="2026-02-18 19:41:03.792749639 +0000 UTC m=+148.574869163"
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.794121 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn" podStartSLOduration=126.794112759 podStartE2EDuration="2m6.794112759s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:03.780142718 +0000 UTC m=+148.562262262" watchObservedRunningTime="2026-02-18 19:41:03.794112759 +0000 UTC m=+148.576232283"
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.799387 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.799517 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.299495688 +0000 UTC m=+149.081615212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.799919 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt"
Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.800340 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.300330612 +0000 UTC m=+149.082450136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.903188 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.906095 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.406064584 +0000 UTC m=+149.188184108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:03 crc kubenswrapper[5007]: I0218 19:41:03.908542 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt"
Feb 18 19:41:03 crc kubenswrapper[5007]: E0218 19:41:03.909026 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.409012751 +0000 UTC m=+149.191132275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.010653 5007 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-b2xj7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.011324 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" podUID="9105100f-dce8-42a8-b182-945de577952d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.011013 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.510996943 +0000 UTC m=+149.293116467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.010950 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.011815 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt"
Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.012344 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.512326042 +0000 UTC m=+149.294445566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.049286 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-clz74" event={"ID":"c17be73d-c271-43eb-b4c6-425ead314225","Type":"ContainerStarted","Data":"1645ff063ae57fba9c9b266f18dab0d23c877721efd9685d01a3e44d4ae13179"}
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.049783 5007 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-z6x8g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.049904 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" podUID="67928300-0460-40b5-bc58-e286187b315e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.054413 5007 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw86n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.054478 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tw86n" podUID="b1fd9d95-9876-43a9-b061-b5ef6bb15741" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.054495 5007 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tfxfg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body=
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.054550 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" podUID="e7e8415f-6c51-452d-b0dc-13edd5475f6d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.055726 5007 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m68bj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.055753 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" podUID="5dbbd3e2-4581-4ca2-9180-3dd62529478e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.078686 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sz9kn"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.113033 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.114121 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.613126199 +0000 UTC m=+149.395245723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.114782 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt"
Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.121953 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.621933089 +0000 UTC m=+149.404052613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.220352 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.220680 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.220717 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.220754 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.220802 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.224543 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.224983 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.724955131 +0000 UTC m=+149.507074655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.226069 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.243188 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.248995 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.272420 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.324130 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt"
Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.325945 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.825931313 +0000 UTC m=+149.608050837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.333716 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.351410 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.360877 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.434084 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.434298 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.934272432 +0000 UTC m=+149.716391956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.434380 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt"
Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.434747 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:04.934731246 +0000 UTC m=+149.716850770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.537073 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.537479 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:05.03745739 +0000 UTC m=+149.819576914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.628586 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:41:04 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:04 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:04 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.628955 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.639120 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.639524 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:41:05.139505563 +0000 UTC m=+149.921625087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.740371 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.740673 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:05.240655671 +0000 UTC m=+150.022775195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.844864 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.845301 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:05.34527882 +0000 UTC m=+150.127398414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.946607 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:04 crc kubenswrapper[5007]: E0218 19:41:04.947227 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:05.447211831 +0000 UTC m=+150.229331355 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.990398 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.991146 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.993935 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 19:41:04 crc kubenswrapper[5007]: I0218 19:41:04.995871 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.049164 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.049233 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5fc694f-6f70-4bf7-a2ac-8e6920d0a028-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.049254 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5fc694f-6f70-4bf7-a2ac-8e6920d0a028-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:41:05 crc kubenswrapper[5007]: E0218 19:41:05.049555 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-18 19:41:05.549541983 +0000 UTC m=+150.331661497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.083461 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.132834 5007 generic.go:334] "Generic (PLEG): container finished" podID="370cd8fe-7bde-4e36-b8bc-dca582137d44" containerID="8f925a051489d04e66e2fe4e88c59b74fde118e0beae3a7f0b4fcf4152c116c4" exitCode=0 Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.132978 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" event={"ID":"370cd8fe-7bde-4e36-b8bc-dca582137d44","Type":"ContainerDied","Data":"8f925a051489d04e66e2fe4e88c59b74fde118e0beae3a7f0b4fcf4152c116c4"} Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.148835 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-clz74" event={"ID":"c17be73d-c271-43eb-b4c6-425ead314225","Type":"ContainerStarted","Data":"9be476a745a5c41f2af9f4180781ad75891b595e9665f114477b028fb7fd2d28"} Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.149674 5007 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tfxfg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: 
connection refused" start-of-body= Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.149761 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" podUID="e7e8415f-6c51-452d-b0dc-13edd5475f6d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.159833 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.160091 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5fc694f-6f70-4bf7-a2ac-8e6920d0a028-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.160122 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5fc694f-6f70-4bf7-a2ac-8e6920d0a028-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.160249 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5fc694f-6f70-4bf7-a2ac-8e6920d0a028-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:41:05 crc kubenswrapper[5007]: E0218 19:41:05.160317 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:05.660302993 +0000 UTC m=+150.442422517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.170182 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z6x8g" Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.197801 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5fc694f-6f70-4bf7-a2ac-8e6920d0a028-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.281636 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:05 crc 
kubenswrapper[5007]: E0218 19:41:05.284576 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:05.78456019 +0000 UTC m=+150.566679714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.385784 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:05 crc kubenswrapper[5007]: E0218 19:41:05.386154 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:05.88613704 +0000 UTC m=+150.668256564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.398822 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.487289 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:05 crc kubenswrapper[5007]: E0218 19:41:05.487794 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:05.987779682 +0000 UTC m=+150.769899206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.589346 5007 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.590925 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:05 crc kubenswrapper[5007]: E0218 19:41:05.591390 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:06.091364401 +0000 UTC m=+150.873483925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.631586 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:41:05 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:05 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:05 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.631643 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.697199 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:05 crc kubenswrapper[5007]: E0218 19:41:05.697666 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:41:06.19765323 +0000 UTC m=+150.979772754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.803253 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:05 crc kubenswrapper[5007]: E0218 19:41:05.803437 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:06.303400511 +0000 UTC m=+151.085520035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.803703 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:05 crc kubenswrapper[5007]: E0218 19:41:05.803984 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:06.303976898 +0000 UTC m=+151.086096412 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.905160 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:05 crc kubenswrapper[5007]: E0218 19:41:05.905389 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:06.405370323 +0000 UTC m=+151.187489847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:05 crc kubenswrapper[5007]: I0218 19:41:05.905714 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:05 crc kubenswrapper[5007]: E0218 19:41:05.906047 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:06.406039552 +0000 UTC m=+151.188159076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.009048 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:06 crc kubenswrapper[5007]: E0218 19:41:06.009565 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:06.509538499 +0000 UTC m=+151.291658023 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.045252 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m68bj" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.110117 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:06 crc kubenswrapper[5007]: E0218 19:41:06.110715 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:06.610702166 +0000 UTC m=+151.392821690 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.167072 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"92d579130d80b8cfcb6e1cc72e434ce68f37e7a354ff82763a634f395f21940a"} Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.167145 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c0734bfbf183ab8bd0b93e8a5e859959dfe2780c520704c8cd98d3e384c1a33e"} Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.176122 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5e37ca54e9eda8e9e2652568cfc760240a514ebc9a86640e2260589a576f5038"} Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.176173 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"fc3075a779476024fef8cd2d42b696518335e898662a9f101db4f9287ff0dca9"} Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.176636 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.186535 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-clz74" event={"ID":"c17be73d-c271-43eb-b4c6-425ead314225","Type":"ContainerStarted","Data":"d9aa046800d0432f9940149cc76dfce2bb0bebc40ed42189a54655ace5664c26"} Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.193412 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8b67871fcd8441a9510f3a976ced8f6097c68716fb2a7b4111c0a54ac7805a73"} Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.193575 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"82b58d8d300811dfcc6a76ddc6c3c03a7ca672dc942caa01ff69f8fef92d85ec"} Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.214037 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:06 crc kubenswrapper[5007]: E0218 19:41:06.214247 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:41:06.714201973 +0000 UTC m=+151.496321497 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.215254 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:06 crc kubenswrapper[5007]: E0218 19:41:06.216811 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:41:06.716795869 +0000 UTC m=+151.498915593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4vmwt" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.246901 5007 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T19:41:05.589611579Z","Handler":null,"Name":""} Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.248258 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z8ph5"] Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.249387 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.260742 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.273348 5007 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.273409 5007 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.277948 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z8ph5"] Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.296595 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.303128 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-clz74" podStartSLOduration=12.3030996 podStartE2EDuration="12.3030996s" podCreationTimestamp="2026-02-18 19:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:06.27999791 +0000 UTC m=+151.062117444" watchObservedRunningTime="2026-02-18 19:41:06.3030996 +0000 UTC m=+151.085219124" Feb 18 19:41:06 crc kubenswrapper[5007]: W0218 19:41:06.307284 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd5fc694f_6f70_4bf7_a2ac_8e6920d0a028.slice/crio-d9ac36e5e41a3d1af2a23f38007559c0245520087297f4d9233cc4920b0041dc WatchSource:0}: Error finding container 
d9ac36e5e41a3d1af2a23f38007559c0245520087297f4d9233cc4920b0041dc: Status 404 returned error can't find the container with id d9ac36e5e41a3d1af2a23f38007559c0245520087297f4d9233cc4920b0041dc Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.316316 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.316469 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84dca2c7-0ed9-4824-8431-88a26864b81c-catalog-content\") pod \"community-operators-z8ph5\" (UID: \"84dca2c7-0ed9-4824-8431-88a26864b81c\") " pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.316546 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhg4t\" (UniqueName: \"kubernetes.io/projected/84dca2c7-0ed9-4824-8431-88a26864b81c-kube-api-access-zhg4t\") pod \"community-operators-z8ph5\" (UID: \"84dca2c7-0ed9-4824-8431-88a26864b81c\") " pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.316570 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84dca2c7-0ed9-4824-8431-88a26864b81c-utilities\") pod \"community-operators-z8ph5\" (UID: \"84dca2c7-0ed9-4824-8431-88a26864b81c\") " pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.328805 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.419154 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84dca2c7-0ed9-4824-8431-88a26864b81c-catalog-content\") pod \"community-operators-z8ph5\" (UID: \"84dca2c7-0ed9-4824-8431-88a26864b81c\") " pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.419245 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.419371 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhg4t\" (UniqueName: \"kubernetes.io/projected/84dca2c7-0ed9-4824-8431-88a26864b81c-kube-api-access-zhg4t\") pod \"community-operators-z8ph5\" (UID: \"84dca2c7-0ed9-4824-8431-88a26864b81c\") " pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.419402 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84dca2c7-0ed9-4824-8431-88a26864b81c-utilities\") pod \"community-operators-z8ph5\" (UID: \"84dca2c7-0ed9-4824-8431-88a26864b81c\") " 
pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.420086 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84dca2c7-0ed9-4824-8431-88a26864b81c-utilities\") pod \"community-operators-z8ph5\" (UID: \"84dca2c7-0ed9-4824-8431-88a26864b81c\") " pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.420479 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84dca2c7-0ed9-4824-8431-88a26864b81c-catalog-content\") pod \"community-operators-z8ph5\" (UID: \"84dca2c7-0ed9-4824-8431-88a26864b81c\") " pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.471844 5007 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.471911 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.517501 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k4wz9"] Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.543621 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhg4t\" (UniqueName: \"kubernetes.io/projected/84dca2c7-0ed9-4824-8431-88a26864b81c-kube-api-access-zhg4t\") pod \"community-operators-z8ph5\" (UID: \"84dca2c7-0ed9-4824-8431-88a26864b81c\") " pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.582176 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4wz9" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.593957 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k4wz9"] Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.595109 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.618103 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.662734 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:41:06 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:06 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:06 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.662825 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.698056 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-smq9b"] Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.705332 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-smq9b" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.705721 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smq9b"] Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.740746 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm2n2\" (UniqueName: \"kubernetes.io/projected/8c778c82-e9ab-40e7-b084-07195587a975-kube-api-access-qm2n2\") pod \"certified-operators-k4wz9\" (UID: \"8c778c82-e9ab-40e7-b084-07195587a975\") " pod="openshift-marketplace/certified-operators-k4wz9" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.740793 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c778c82-e9ab-40e7-b084-07195587a975-catalog-content\") pod \"certified-operators-k4wz9\" (UID: \"8c778c82-e9ab-40e7-b084-07195587a975\") " pod="openshift-marketplace/certified-operators-k4wz9" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.740823 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c778c82-e9ab-40e7-b084-07195587a975-utilities\") pod \"certified-operators-k4wz9\" (UID: \"8c778c82-e9ab-40e7-b084-07195587a975\") " pod="openshift-marketplace/certified-operators-k4wz9" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.792967 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4vmwt\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 
19:41:06.827636 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-45xrd"] Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.828861 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.843904 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057894ef-e113-4b3d-89d7-178aec98eae9-utilities\") pod \"community-operators-smq9b\" (UID: \"057894ef-e113-4b3d-89d7-178aec98eae9\") " pod="openshift-marketplace/community-operators-smq9b" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.843962 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nblr8\" (UniqueName: \"kubernetes.io/projected/057894ef-e113-4b3d-89d7-178aec98eae9-kube-api-access-nblr8\") pod \"community-operators-smq9b\" (UID: \"057894ef-e113-4b3d-89d7-178aec98eae9\") " pod="openshift-marketplace/community-operators-smq9b" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.843986 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057894ef-e113-4b3d-89d7-178aec98eae9-catalog-content\") pod \"community-operators-smq9b\" (UID: \"057894ef-e113-4b3d-89d7-178aec98eae9\") " pod="openshift-marketplace/community-operators-smq9b" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.844009 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm2n2\" (UniqueName: \"kubernetes.io/projected/8c778c82-e9ab-40e7-b084-07195587a975-kube-api-access-qm2n2\") pod \"certified-operators-k4wz9\" (UID: \"8c778c82-e9ab-40e7-b084-07195587a975\") " pod="openshift-marketplace/certified-operators-k4wz9" Feb 18 19:41:06 crc 
kubenswrapper[5007]: I0218 19:41:06.844029 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c778c82-e9ab-40e7-b084-07195587a975-catalog-content\") pod \"certified-operators-k4wz9\" (UID: \"8c778c82-e9ab-40e7-b084-07195587a975\") " pod="openshift-marketplace/certified-operators-k4wz9" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.844047 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c778c82-e9ab-40e7-b084-07195587a975-utilities\") pod \"certified-operators-k4wz9\" (UID: \"8c778c82-e9ab-40e7-b084-07195587a975\") " pod="openshift-marketplace/certified-operators-k4wz9" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.844642 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c778c82-e9ab-40e7-b084-07195587a975-utilities\") pod \"certified-operators-k4wz9\" (UID: \"8c778c82-e9ab-40e7-b084-07195587a975\") " pod="openshift-marketplace/certified-operators-k4wz9" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.844895 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c778c82-e9ab-40e7-b084-07195587a975-catalog-content\") pod \"certified-operators-k4wz9\" (UID: \"8c778c82-e9ab-40e7-b084-07195587a975\") " pod="openshift-marketplace/certified-operators-k4wz9" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.848942 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.852549 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-45xrd"] Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.866998 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm2n2\" (UniqueName: \"kubernetes.io/projected/8c778c82-e9ab-40e7-b084-07195587a975-kube-api-access-qm2n2\") pod \"certified-operators-k4wz9\" (UID: \"8c778c82-e9ab-40e7-b084-07195587a975\") " pod="openshift-marketplace/certified-operators-k4wz9" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.945759 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/370cd8fe-7bde-4e36-b8bc-dca582137d44-secret-volume\") pod \"370cd8fe-7bde-4e36-b8bc-dca582137d44\" (UID: \"370cd8fe-7bde-4e36-b8bc-dca582137d44\") " Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.945865 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/370cd8fe-7bde-4e36-b8bc-dca582137d44-config-volume\") pod \"370cd8fe-7bde-4e36-b8bc-dca582137d44\" (UID: \"370cd8fe-7bde-4e36-b8bc-dca582137d44\") " Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.945894 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndxsf\" (UniqueName: \"kubernetes.io/projected/370cd8fe-7bde-4e36-b8bc-dca582137d44-kube-api-access-ndxsf\") pod \"370cd8fe-7bde-4e36-b8bc-dca582137d44\" (UID: \"370cd8fe-7bde-4e36-b8bc-dca582137d44\") " Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.946143 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a403818b-8748-413d-80e7-cc5217779236-utilities\") pod \"certified-operators-45xrd\" (UID: \"a403818b-8748-413d-80e7-cc5217779236\") " pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.946212 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-879b6\" (UniqueName: \"kubernetes.io/projected/a403818b-8748-413d-80e7-cc5217779236-kube-api-access-879b6\") pod \"certified-operators-45xrd\" (UID: \"a403818b-8748-413d-80e7-cc5217779236\") " pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.946244 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a403818b-8748-413d-80e7-cc5217779236-catalog-content\") pod \"certified-operators-45xrd\" (UID: \"a403818b-8748-413d-80e7-cc5217779236\") " pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.946355 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057894ef-e113-4b3d-89d7-178aec98eae9-utilities\") pod \"community-operators-smq9b\" (UID: \"057894ef-e113-4b3d-89d7-178aec98eae9\") " pod="openshift-marketplace/community-operators-smq9b" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.946394 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nblr8\" (UniqueName: \"kubernetes.io/projected/057894ef-e113-4b3d-89d7-178aec98eae9-kube-api-access-nblr8\") pod \"community-operators-smq9b\" (UID: \"057894ef-e113-4b3d-89d7-178aec98eae9\") " pod="openshift-marketplace/community-operators-smq9b" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.946872 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057894ef-e113-4b3d-89d7-178aec98eae9-catalog-content\") pod \"community-operators-smq9b\" (UID: \"057894ef-e113-4b3d-89d7-178aec98eae9\") " pod="openshift-marketplace/community-operators-smq9b" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.947123 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370cd8fe-7bde-4e36-b8bc-dca582137d44-config-volume" (OuterVolumeSpecName: "config-volume") pod "370cd8fe-7bde-4e36-b8bc-dca582137d44" (UID: "370cd8fe-7bde-4e36-b8bc-dca582137d44"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.947414 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057894ef-e113-4b3d-89d7-178aec98eae9-utilities\") pod \"community-operators-smq9b\" (UID: \"057894ef-e113-4b3d-89d7-178aec98eae9\") " pod="openshift-marketplace/community-operators-smq9b" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.947703 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057894ef-e113-4b3d-89d7-178aec98eae9-catalog-content\") pod \"community-operators-smq9b\" (UID: \"057894ef-e113-4b3d-89d7-178aec98eae9\") " pod="openshift-marketplace/community-operators-smq9b" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.950705 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4wz9" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.952202 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370cd8fe-7bde-4e36-b8bc-dca582137d44-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "370cd8fe-7bde-4e36-b8bc-dca582137d44" (UID: "370cd8fe-7bde-4e36-b8bc-dca582137d44"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.952658 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370cd8fe-7bde-4e36-b8bc-dca582137d44-kube-api-access-ndxsf" (OuterVolumeSpecName: "kube-api-access-ndxsf") pod "370cd8fe-7bde-4e36-b8bc-dca582137d44" (UID: "370cd8fe-7bde-4e36-b8bc-dca582137d44"). InnerVolumeSpecName "kube-api-access-ndxsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.969835 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nblr8\" (UniqueName: \"kubernetes.io/projected/057894ef-e113-4b3d-89d7-178aec98eae9-kube-api-access-nblr8\") pod \"community-operators-smq9b\" (UID: \"057894ef-e113-4b3d-89d7-178aec98eae9\") " pod="openshift-marketplace/community-operators-smq9b" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.990810 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.991491 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.998019 5007 patch_prober.go:28] interesting pod/apiserver-76f77b778f-d8pd9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 18 19:41:06 crc kubenswrapper[5007]: [+]log ok Feb 18 19:41:06 crc kubenswrapper[5007]: [+]etcd ok Feb 18 19:41:06 crc kubenswrapper[5007]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 18 19:41:06 crc kubenswrapper[5007]: [+]poststarthook/generic-apiserver-start-informers ok Feb 18 19:41:06 crc kubenswrapper[5007]: 
[+]poststarthook/max-in-flight-filter ok Feb 18 19:41:06 crc kubenswrapper[5007]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 18 19:41:06 crc kubenswrapper[5007]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 18 19:41:06 crc kubenswrapper[5007]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 18 19:41:06 crc kubenswrapper[5007]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 18 19:41:06 crc kubenswrapper[5007]: [+]poststarthook/project.openshift.io-projectcache ok Feb 18 19:41:06 crc kubenswrapper[5007]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 18 19:41:06 crc kubenswrapper[5007]: [+]poststarthook/openshift.io-startinformers ok Feb 18 19:41:06 crc kubenswrapper[5007]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 18 19:41:06 crc kubenswrapper[5007]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 18 19:41:06 crc kubenswrapper[5007]: livez check failed Feb 18 19:41:06 crc kubenswrapper[5007]: I0218 19:41:06.998059 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" podUID="cee7777d-8cb6-4c90-a650-4da91faaf021" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.032397 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-smq9b" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.047844 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-879b6\" (UniqueName: \"kubernetes.io/projected/a403818b-8748-413d-80e7-cc5217779236-kube-api-access-879b6\") pod \"certified-operators-45xrd\" (UID: \"a403818b-8748-413d-80e7-cc5217779236\") " pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.047904 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a403818b-8748-413d-80e7-cc5217779236-catalog-content\") pod \"certified-operators-45xrd\" (UID: \"a403818b-8748-413d-80e7-cc5217779236\") " pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.047997 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a403818b-8748-413d-80e7-cc5217779236-utilities\") pod \"certified-operators-45xrd\" (UID: \"a403818b-8748-413d-80e7-cc5217779236\") " pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.048087 5007 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/370cd8fe-7bde-4e36-b8bc-dca582137d44-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.048105 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndxsf\" (UniqueName: \"kubernetes.io/projected/370cd8fe-7bde-4e36-b8bc-dca582137d44-kube-api-access-ndxsf\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.048116 5007 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/370cd8fe-7bde-4e36-b8bc-dca582137d44-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.048717 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a403818b-8748-413d-80e7-cc5217779236-utilities\") pod \"certified-operators-45xrd\" (UID: \"a403818b-8748-413d-80e7-cc5217779236\") " pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.049018 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a403818b-8748-413d-80e7-cc5217779236-catalog-content\") pod \"certified-operators-45xrd\" (UID: \"a403818b-8748-413d-80e7-cc5217779236\") " pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.060765 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.069128 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-879b6\" (UniqueName: \"kubernetes.io/projected/a403818b-8748-413d-80e7-cc5217779236-kube-api-access-879b6\") pod \"certified-operators-45xrd\" (UID: \"a403818b-8748-413d-80e7-cc5217779236\") " pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.176862 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.222317 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z8ph5"] Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.242965 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" event={"ID":"370cd8fe-7bde-4e36-b8bc-dca582137d44","Type":"ContainerDied","Data":"b1a451967e7b7a5b706b502381fda7e54d500b2d6e0e894c02e5f431d3fa1191"} Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.243004 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1a451967e7b7a5b706b502381fda7e54d500b2d6e0e894c02e5f431d3fa1191" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.243083 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.251756 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.251804 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.252004 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 
10.217.0.32:8080: connect: connection refused" start-of-body= Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.252026 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.257289 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028","Type":"ContainerStarted","Data":"15ef8464fa3608eced63961690403fe6d64af97a0cdefbcc033ad4cf3da5fe79"} Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.257353 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028","Type":"ContainerStarted","Data":"d9ac36e5e41a3d1af2a23f38007559c0245520087297f4d9233cc4920b0041dc"} Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.280036 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.280005574 podStartE2EDuration="3.280005574s" podCreationTimestamp="2026-02-18 19:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:07.275710658 +0000 UTC m=+152.057830192" watchObservedRunningTime="2026-02-18 19:41:07.280005574 +0000 UTC m=+152.062125098" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.351278 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k4wz9"] Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.461974 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-smq9b"] Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.504285 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4vmwt"] Feb 18 19:41:07 crc kubenswrapper[5007]: W0218 19:41:07.545482 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf895f35a_b144_444a_9329_3e7c3d0a1d9b.slice/crio-c70a0963ee283bf96e88a8f668482f11779d5673d0b9091533573fe5539e1e96 WatchSource:0}: Error finding container c70a0963ee283bf96e88a8f668482f11779d5673d0b9091533573fe5539e1e96: Status 404 returned error can't find the container with id c70a0963ee283bf96e88a8f668482f11779d5673d0b9091533573fe5539e1e96 Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.554726 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-45xrd"] Feb 18 19:41:07 crc kubenswrapper[5007]: W0218 19:41:07.564499 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda403818b_8748_413d_80e7_cc5217779236.slice/crio-b38a995e1c088172278435dafc7cf20a8960e61c8976315c79dbe8940aa8bf15 WatchSource:0}: Error finding container b38a995e1c088172278435dafc7cf20a8960e61c8976315c79dbe8940aa8bf15: Status 404 returned error can't find the container with id b38a995e1c088172278435dafc7cf20a8960e61c8976315c79dbe8940aa8bf15 Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.623615 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.627133 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 
19:41:07 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:07 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:07 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.627173 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.841860 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.843089 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.848907 5007 patch_prober.go:28] interesting pod/console-f9d7485db-5kxmr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.849039 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5kxmr" podUID="7afb27dc-b979-45ec-ae79-5595d0dedb6e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.919384 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.958160 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console-operator/console-operator-58897d9998-tw86n" Feb 18 19:41:07 crc kubenswrapper[5007]: I0218 19:41:07.991759 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.215621 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vddlz"] Feb 18 19:41:08 crc kubenswrapper[5007]: E0218 19:41:08.215996 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370cd8fe-7bde-4e36-b8bc-dca582137d44" containerName="collect-profiles" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.216008 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="370cd8fe-7bde-4e36-b8bc-dca582137d44" containerName="collect-profiles" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.216145 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="370cd8fe-7bde-4e36-b8bc-dca582137d44" containerName="collect-profiles" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.216904 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.218865 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.233130 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vddlz"] Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.263929 5007 generic.go:334] "Generic (PLEG): container finished" podID="8c778c82-e9ab-40e7-b084-07195587a975" containerID="6a1bf21b9248245a518d8f094ed7cbd58efa2126e86b655aef98a3f825b4eb7c" exitCode=0 Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.263994 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4wz9" event={"ID":"8c778c82-e9ab-40e7-b084-07195587a975","Type":"ContainerDied","Data":"6a1bf21b9248245a518d8f094ed7cbd58efa2126e86b655aef98a3f825b4eb7c"} Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.264024 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4wz9" event={"ID":"8c778c82-e9ab-40e7-b084-07195587a975","Type":"ContainerStarted","Data":"792e662a999f142236f4f32cb21b048854025d0e45424e77cd79051362d844d4"} Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.266118 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.267373 5007 generic.go:334] "Generic (PLEG): container finished" podID="057894ef-e113-4b3d-89d7-178aec98eae9" containerID="bcda3bb14031c1099a5305a61a1187551070d472b5666d01fa5e680779cc3f61" exitCode=0 Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.267422 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smq9b" 
event={"ID":"057894ef-e113-4b3d-89d7-178aec98eae9","Type":"ContainerDied","Data":"bcda3bb14031c1099a5305a61a1187551070d472b5666d01fa5e680779cc3f61"} Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.267483 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smq9b" event={"ID":"057894ef-e113-4b3d-89d7-178aec98eae9","Type":"ContainerStarted","Data":"b9337febda48ef164c1b95dd035ecedb36439d2b2e5340ea3e180731108218cc"} Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.272859 5007 generic.go:334] "Generic (PLEG): container finished" podID="d5fc694f-6f70-4bf7-a2ac-8e6920d0a028" containerID="15ef8464fa3608eced63961690403fe6d64af97a0cdefbcc033ad4cf3da5fe79" exitCode=0 Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.272949 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028","Type":"ContainerDied","Data":"15ef8464fa3608eced63961690403fe6d64af97a0cdefbcc033ad4cf3da5fe79"} Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.274385 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" event={"ID":"f895f35a-b144-444a-9329-3e7c3d0a1d9b","Type":"ContainerStarted","Data":"9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4"} Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.274423 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" event={"ID":"f895f35a-b144-444a-9329-3e7c3d0a1d9b","Type":"ContainerStarted","Data":"c70a0963ee283bf96e88a8f668482f11779d5673d0b9091533573fe5539e1e96"} Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.274468 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.275648 5007 
generic.go:334] "Generic (PLEG): container finished" podID="a403818b-8748-413d-80e7-cc5217779236" containerID="fd305a191ca36eb6daeb45824ff65c827a7b959ed14f892a0f4d4e84e714d584" exitCode=0 Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.275834 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45xrd" event={"ID":"a403818b-8748-413d-80e7-cc5217779236","Type":"ContainerDied","Data":"fd305a191ca36eb6daeb45824ff65c827a7b959ed14f892a0f4d4e84e714d584"} Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.275865 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45xrd" event={"ID":"a403818b-8748-413d-80e7-cc5217779236","Type":"ContainerStarted","Data":"b38a995e1c088172278435dafc7cf20a8960e61c8976315c79dbe8940aa8bf15"} Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.277400 5007 generic.go:334] "Generic (PLEG): container finished" podID="84dca2c7-0ed9-4824-8431-88a26864b81c" containerID="677a69f46dfe9e8674ae2fc79854a34b355cb4a38962ceb2aea7c49460d3a951" exitCode=0 Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.277440 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8ph5" event={"ID":"84dca2c7-0ed9-4824-8431-88a26864b81c","Type":"ContainerDied","Data":"677a69f46dfe9e8674ae2fc79854a34b355cb4a38962ceb2aea7c49460d3a951"} Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.277473 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8ph5" event={"ID":"84dca2c7-0ed9-4824-8431-88a26864b81c","Type":"ContainerStarted","Data":"e9a9590641b499658f791976478da87fd8435b4caee11949430782546ca44c91"} Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.298664 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pgvrb" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 
19:41:08.344054 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" podStartSLOduration=131.344027344 podStartE2EDuration="2m11.344027344s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:08.342355575 +0000 UTC m=+153.124475109" watchObservedRunningTime="2026-02-18 19:41:08.344027344 +0000 UTC m=+153.126146868" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.366719 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cbfc029-9e57-4f39-942c-47f3c8f866b2-catalog-content\") pod \"redhat-marketplace-vddlz\" (UID: \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\") " pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.366796 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q966\" (UniqueName: \"kubernetes.io/projected/9cbfc029-9e57-4f39-942c-47f3c8f866b2-kube-api-access-5q966\") pod \"redhat-marketplace-vddlz\" (UID: \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\") " pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.366832 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cbfc029-9e57-4f39-942c-47f3c8f866b2-utilities\") pod \"redhat-marketplace-vddlz\" (UID: \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\") " pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.469314 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9cbfc029-9e57-4f39-942c-47f3c8f866b2-utilities\") pod \"redhat-marketplace-vddlz\" (UID: \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\") " pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.469480 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cbfc029-9e57-4f39-942c-47f3c8f866b2-catalog-content\") pod \"redhat-marketplace-vddlz\" (UID: \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\") " pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.469501 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q966\" (UniqueName: \"kubernetes.io/projected/9cbfc029-9e57-4f39-942c-47f3c8f866b2-kube-api-access-5q966\") pod \"redhat-marketplace-vddlz\" (UID: \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\") " pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.470200 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cbfc029-9e57-4f39-942c-47f3c8f866b2-utilities\") pod \"redhat-marketplace-vddlz\" (UID: \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\") " pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.470210 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cbfc029-9e57-4f39-942c-47f3c8f866b2-catalog-content\") pod \"redhat-marketplace-vddlz\" (UID: \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\") " pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.507868 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q966\" (UniqueName: 
\"kubernetes.io/projected/9cbfc029-9e57-4f39-942c-47f3c8f866b2-kube-api-access-5q966\") pod \"redhat-marketplace-vddlz\" (UID: \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\") " pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.554408 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.618845 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kg74r"] Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.619944 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.631936 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:41:08 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:08 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:08 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.631991 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.634336 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg74r"] Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.773894 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/438564b4-5423-4e65-b05b-054043ad93c6-catalog-content\") pod \"redhat-marketplace-kg74r\" (UID: \"438564b4-5423-4e65-b05b-054043ad93c6\") " pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.774015 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m92l7\" (UniqueName: \"kubernetes.io/projected/438564b4-5423-4e65-b05b-054043ad93c6-kube-api-access-m92l7\") pod \"redhat-marketplace-kg74r\" (UID: \"438564b4-5423-4e65-b05b-054043ad93c6\") " pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.774050 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/438564b4-5423-4e65-b05b-054043ad93c6-utilities\") pod \"redhat-marketplace-kg74r\" (UID: \"438564b4-5423-4e65-b05b-054043ad93c6\") " pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.872092 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vddlz"] Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.875407 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m92l7\" (UniqueName: \"kubernetes.io/projected/438564b4-5423-4e65-b05b-054043ad93c6-kube-api-access-m92l7\") pod \"redhat-marketplace-kg74r\" (UID: \"438564b4-5423-4e65-b05b-054043ad93c6\") " pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.875472 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/438564b4-5423-4e65-b05b-054043ad93c6-utilities\") pod \"redhat-marketplace-kg74r\" (UID: \"438564b4-5423-4e65-b05b-054043ad93c6\") " 
pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.875515 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/438564b4-5423-4e65-b05b-054043ad93c6-catalog-content\") pod \"redhat-marketplace-kg74r\" (UID: \"438564b4-5423-4e65-b05b-054043ad93c6\") " pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.875980 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/438564b4-5423-4e65-b05b-054043ad93c6-catalog-content\") pod \"redhat-marketplace-kg74r\" (UID: \"438564b4-5423-4e65-b05b-054043ad93c6\") " pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.876084 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/438564b4-5423-4e65-b05b-054043ad93c6-utilities\") pod \"redhat-marketplace-kg74r\" (UID: \"438564b4-5423-4e65-b05b-054043ad93c6\") " pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.896458 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m92l7\" (UniqueName: \"kubernetes.io/projected/438564b4-5423-4e65-b05b-054043ad93c6-kube-api-access-m92l7\") pod \"redhat-marketplace-kg74r\" (UID: \"438564b4-5423-4e65-b05b-054043ad93c6\") " pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:41:08 crc kubenswrapper[5007]: I0218 19:41:08.949156 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.286602 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg74r"] Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.288793 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vddlz" event={"ID":"9cbfc029-9e57-4f39-942c-47f3c8f866b2","Type":"ContainerStarted","Data":"5f33155305a27da3c64cd453c185041d05d25fc2cf99848a4c9797a162857bf5"} Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.419312 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mdh7g"] Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.420407 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.422535 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.433612 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdh7g"] Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.493756 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3504cb19-2058-4787-b5fb-c0b64aa2322f-catalog-content\") pod \"redhat-operators-mdh7g\" (UID: \"3504cb19-2058-4787-b5fb-c0b64aa2322f\") " pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.493817 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3504cb19-2058-4787-b5fb-c0b64aa2322f-utilities\") pod \"redhat-operators-mdh7g\" 
(UID: \"3504cb19-2058-4787-b5fb-c0b64aa2322f\") " pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.493848 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq78t\" (UniqueName: \"kubernetes.io/projected/3504cb19-2058-4787-b5fb-c0b64aa2322f-kube-api-access-hq78t\") pod \"redhat-operators-mdh7g\" (UID: \"3504cb19-2058-4787-b5fb-c0b64aa2322f\") " pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.597116 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq78t\" (UniqueName: \"kubernetes.io/projected/3504cb19-2058-4787-b5fb-c0b64aa2322f-kube-api-access-hq78t\") pod \"redhat-operators-mdh7g\" (UID: \"3504cb19-2058-4787-b5fb-c0b64aa2322f\") " pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.597230 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3504cb19-2058-4787-b5fb-c0b64aa2322f-catalog-content\") pod \"redhat-operators-mdh7g\" (UID: \"3504cb19-2058-4787-b5fb-c0b64aa2322f\") " pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.597271 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3504cb19-2058-4787-b5fb-c0b64aa2322f-utilities\") pod \"redhat-operators-mdh7g\" (UID: \"3504cb19-2058-4787-b5fb-c0b64aa2322f\") " pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.597789 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3504cb19-2058-4787-b5fb-c0b64aa2322f-utilities\") pod \"redhat-operators-mdh7g\" (UID: 
\"3504cb19-2058-4787-b5fb-c0b64aa2322f\") " pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.598072 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3504cb19-2058-4787-b5fb-c0b64aa2322f-catalog-content\") pod \"redhat-operators-mdh7g\" (UID: \"3504cb19-2058-4787-b5fb-c0b64aa2322f\") " pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.641248 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:41:09 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:09 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:09 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.641338 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.646241 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq78t\" (UniqueName: \"kubernetes.io/projected/3504cb19-2058-4787-b5fb-c0b64aa2322f-kube-api-access-hq78t\") pod \"redhat-operators-mdh7g\" (UID: \"3504cb19-2058-4787-b5fb-c0b64aa2322f\") " pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.742821 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.799937 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.825488 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qk22p"] Feb 18 19:41:09 crc kubenswrapper[5007]: E0218 19:41:09.826161 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5fc694f-6f70-4bf7-a2ac-8e6920d0a028" containerName="pruner" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.826184 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5fc694f-6f70-4bf7-a2ac-8e6920d0a028" containerName="pruner" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.826318 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5fc694f-6f70-4bf7-a2ac-8e6920d0a028" containerName="pruner" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.828579 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.840372 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qk22p"] Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.899989 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5fc694f-6f70-4bf7-a2ac-8e6920d0a028-kube-api-access\") pod \"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028\" (UID: \"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028\") " Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.900095 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5fc694f-6f70-4bf7-a2ac-8e6920d0a028-kubelet-dir\") pod \"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028\" (UID: \"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028\") " Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.900517 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303d7555-90c4-459c-9dcf-181f93d89ed8-catalog-content\") pod \"redhat-operators-qk22p\" (UID: \"303d7555-90c4-459c-9dcf-181f93d89ed8\") " pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.900580 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjsx5\" (UniqueName: \"kubernetes.io/projected/303d7555-90c4-459c-9dcf-181f93d89ed8-kube-api-access-jjsx5\") pod \"redhat-operators-qk22p\" (UID: \"303d7555-90c4-459c-9dcf-181f93d89ed8\") " pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.900665 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/303d7555-90c4-459c-9dcf-181f93d89ed8-utilities\") pod \"redhat-operators-qk22p\" (UID: \"303d7555-90c4-459c-9dcf-181f93d89ed8\") " pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.900838 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5fc694f-6f70-4bf7-a2ac-8e6920d0a028-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d5fc694f-6f70-4bf7-a2ac-8e6920d0a028" (UID: "d5fc694f-6f70-4bf7-a2ac-8e6920d0a028"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:41:09 crc kubenswrapper[5007]: I0218 19:41:09.915115 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5fc694f-6f70-4bf7-a2ac-8e6920d0a028-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d5fc694f-6f70-4bf7-a2ac-8e6920d0a028" (UID: "d5fc694f-6f70-4bf7-a2ac-8e6920d0a028"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.001733 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303d7555-90c4-459c-9dcf-181f93d89ed8-catalog-content\") pod \"redhat-operators-qk22p\" (UID: \"303d7555-90c4-459c-9dcf-181f93d89ed8\") " pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.001817 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjsx5\" (UniqueName: \"kubernetes.io/projected/303d7555-90c4-459c-9dcf-181f93d89ed8-kube-api-access-jjsx5\") pod \"redhat-operators-qk22p\" (UID: \"303d7555-90c4-459c-9dcf-181f93d89ed8\") " pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.001861 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303d7555-90c4-459c-9dcf-181f93d89ed8-utilities\") pod \"redhat-operators-qk22p\" (UID: \"303d7555-90c4-459c-9dcf-181f93d89ed8\") " pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.001980 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5fc694f-6f70-4bf7-a2ac-8e6920d0a028-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.001995 5007 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5fc694f-6f70-4bf7-a2ac-8e6920d0a028-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.003047 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303d7555-90c4-459c-9dcf-181f93d89ed8-utilities\") 
pod \"redhat-operators-qk22p\" (UID: \"303d7555-90c4-459c-9dcf-181f93d89ed8\") " pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.003155 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303d7555-90c4-459c-9dcf-181f93d89ed8-catalog-content\") pod \"redhat-operators-qk22p\" (UID: \"303d7555-90c4-459c-9dcf-181f93d89ed8\") " pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.027773 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjsx5\" (UniqueName: \"kubernetes.io/projected/303d7555-90c4-459c-9dcf-181f93d89ed8-kube-api-access-jjsx5\") pod \"redhat-operators-qk22p\" (UID: \"303d7555-90c4-459c-9dcf-181f93d89ed8\") " pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.165885 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.232498 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdh7g"] Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.320320 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdh7g" event={"ID":"3504cb19-2058-4787-b5fb-c0b64aa2322f","Type":"ContainerStarted","Data":"e92c338bbe119dae19a8137e80d73cdf7bdc269a580658990119b306688a1883"} Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.324905 5007 generic.go:334] "Generic (PLEG): container finished" podID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" containerID="4c0c5753915e08973ae03f850fed76b02158693c4b710414698bd5a97c0384ee" exitCode=0 Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.324977 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vddlz" event={"ID":"9cbfc029-9e57-4f39-942c-47f3c8f866b2","Type":"ContainerDied","Data":"4c0c5753915e08973ae03f850fed76b02158693c4b710414698bd5a97c0384ee"} Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.327920 5007 generic.go:334] "Generic (PLEG): container finished" podID="438564b4-5423-4e65-b05b-054043ad93c6" containerID="c8727577e45ba9fd5c35895e188d42d023434d8f710ad89ca4d2471bc1521aac" exitCode=0 Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.328033 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg74r" event={"ID":"438564b4-5423-4e65-b05b-054043ad93c6","Type":"ContainerDied","Data":"c8727577e45ba9fd5c35895e188d42d023434d8f710ad89ca4d2471bc1521aac"} Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.328111 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg74r" 
event={"ID":"438564b4-5423-4e65-b05b-054043ad93c6","Type":"ContainerStarted","Data":"c7a572fa5231a337706a0f3e7c627aa59bca14eb54786c3ab84066d1b7478779"} Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.343979 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d5fc694f-6f70-4bf7-a2ac-8e6920d0a028","Type":"ContainerDied","Data":"d9ac36e5e41a3d1af2a23f38007559c0245520087297f4d9233cc4920b0041dc"} Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.344027 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9ac36e5e41a3d1af2a23f38007559c0245520087297f4d9233cc4920b0041dc" Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.344125 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.571838 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qk22p"] Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.627589 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:41:10 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:10 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:10 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:10 crc kubenswrapper[5007]: I0218 19:41:10.627688 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:10 crc kubenswrapper[5007]: W0218 19:41:10.636134 5007 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod303d7555_90c4_459c_9dcf_181f93d89ed8.slice/crio-bd2e8a78bb79414719018c25e92332ff6b1b8b853e96ac74c3748232032272d6 WatchSource:0}: Error finding container bd2e8a78bb79414719018c25e92332ff6b1b8b853e96ac74c3748232032272d6: Status 404 returned error can't find the container with id bd2e8a78bb79414719018c25e92332ff6b1b8b853e96ac74c3748232032272d6 Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.264472 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.266200 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.272240 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.272406 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.288457 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.376581 5007 generic.go:334] "Generic (PLEG): container finished" podID="303d7555-90c4-459c-9dcf-181f93d89ed8" containerID="080795640cd3510d59b1d7a68dc9205562125b72ca7ea4b2c95c1a105888ddb1" exitCode=0 Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.376651 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk22p" event={"ID":"303d7555-90c4-459c-9dcf-181f93d89ed8","Type":"ContainerDied","Data":"080795640cd3510d59b1d7a68dc9205562125b72ca7ea4b2c95c1a105888ddb1"} Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.376732 5007 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk22p" event={"ID":"303d7555-90c4-459c-9dcf-181f93d89ed8","Type":"ContainerStarted","Data":"bd2e8a78bb79414719018c25e92332ff6b1b8b853e96ac74c3748232032272d6"} Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.385594 5007 generic.go:334] "Generic (PLEG): container finished" podID="3504cb19-2058-4787-b5fb-c0b64aa2322f" containerID="df67e6dd85644c1e09be67cb107a2ad9607d4f80ce3d4b3b629390e37f9b3dd9" exitCode=0 Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.385638 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdh7g" event={"ID":"3504cb19-2058-4787-b5fb-c0b64aa2322f","Type":"ContainerDied","Data":"df67e6dd85644c1e09be67cb107a2ad9607d4f80ce3d4b3b629390e37f9b3dd9"} Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.432616 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4b30781-7982-42a9-a208-e71364ff8fa4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a4b30781-7982-42a9-a208-e71364ff8fa4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.432710 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4b30781-7982-42a9-a208-e71364ff8fa4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a4b30781-7982-42a9-a208-e71364ff8fa4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.534072 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4b30781-7982-42a9-a208-e71364ff8fa4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a4b30781-7982-42a9-a208-e71364ff8fa4\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.534134 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4b30781-7982-42a9-a208-e71364ff8fa4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a4b30781-7982-42a9-a208-e71364ff8fa4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.534228 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4b30781-7982-42a9-a208-e71364ff8fa4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a4b30781-7982-42a9-a208-e71364ff8fa4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.554260 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4b30781-7982-42a9-a208-e71364ff8fa4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a4b30781-7982-42a9-a208-e71364ff8fa4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.609980 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.627545 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:41:11 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:11 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:11 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.627630 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:11 crc kubenswrapper[5007]: I0218 19:41:11.990630 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:41:12 crc kubenswrapper[5007]: I0218 19:41:12.000769 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-d8pd9" Feb 18 19:41:12 crc kubenswrapper[5007]: I0218 19:41:12.119197 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 19:41:12 crc kubenswrapper[5007]: I0218 19:41:12.423563 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a4b30781-7982-42a9-a208-e71364ff8fa4","Type":"ContainerStarted","Data":"65415ccbb6e6ba6b7c42494e93605ba2e2b76bdd2ddf0290aea504a70bd25dff"} Feb 18 19:41:12 crc kubenswrapper[5007]: I0218 19:41:12.626590 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:41:12 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:12 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:12 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:12 crc kubenswrapper[5007]: I0218 19:41:12.626689 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:13 crc kubenswrapper[5007]: I0218 19:41:13.051065 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8rhk6" Feb 18 19:41:13 crc kubenswrapper[5007]: I0218 19:41:13.439721 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a4b30781-7982-42a9-a208-e71364ff8fa4","Type":"ContainerStarted","Data":"5950620d8f21bfa24982ea97082ad50ac86f4f3c0bcdeeaac949e2988597aa9d"} Feb 18 19:41:13 crc kubenswrapper[5007]: I0218 19:41:13.473891 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.473873559 podStartE2EDuration="2.473873559s" podCreationTimestamp="2026-02-18 19:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:13.473409365 +0000 UTC m=+158.255528889" watchObservedRunningTime="2026-02-18 19:41:13.473873559 +0000 UTC m=+158.255993083" Feb 18 19:41:13 crc kubenswrapper[5007]: I0218 19:41:13.628641 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 18 19:41:13 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:13 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:13 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:13 crc kubenswrapper[5007]: I0218 19:41:13.628740 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:14 crc kubenswrapper[5007]: I0218 19:41:14.470940 5007 generic.go:334] "Generic (PLEG): container finished" podID="a4b30781-7982-42a9-a208-e71364ff8fa4" containerID="5950620d8f21bfa24982ea97082ad50ac86f4f3c0bcdeeaac949e2988597aa9d" exitCode=0 Feb 18 19:41:14 crc kubenswrapper[5007]: I0218 19:41:14.471007 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a4b30781-7982-42a9-a208-e71364ff8fa4","Type":"ContainerDied","Data":"5950620d8f21bfa24982ea97082ad50ac86f4f3c0bcdeeaac949e2988597aa9d"} Feb 18 19:41:14 crc kubenswrapper[5007]: I0218 19:41:14.627865 5007 patch_prober.go:28] interesting pod/router-default-5444994796-764l7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:41:14 crc kubenswrapper[5007]: [-]has-synced failed: reason withheld Feb 18 19:41:14 crc kubenswrapper[5007]: [+]process-running ok Feb 18 19:41:14 crc kubenswrapper[5007]: healthz check failed Feb 18 19:41:14 crc kubenswrapper[5007]: I0218 19:41:14.627954 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-764l7" podUID="78ae9445-87b0-4e87-a46c-d2a0146d2051" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:41:15 crc kubenswrapper[5007]: I0218 19:41:15.626216 5007 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:41:15 crc kubenswrapper[5007]: I0218 19:41:15.629309 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-764l7" Feb 18 19:41:15 crc kubenswrapper[5007]: I0218 19:41:15.670424 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:41:15 crc kubenswrapper[5007]: I0218 19:41:15.670496 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:41:17 crc kubenswrapper[5007]: I0218 19:41:17.251536 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 18 19:41:17 crc kubenswrapper[5007]: I0218 19:41:17.251820 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 18 19:41:17 crc kubenswrapper[5007]: I0218 19:41:17.251969 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 18 19:41:17 crc kubenswrapper[5007]: I0218 19:41:17.252033 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 18 19:41:17 crc kubenswrapper[5007]: I0218 19:41:17.848483 5007 patch_prober.go:28] interesting pod/console-f9d7485db-5kxmr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 18 19:41:17 crc kubenswrapper[5007]: I0218 19:41:17.848547 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5kxmr" podUID="7afb27dc-b979-45ec-ae79-5595d0dedb6e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 18 19:41:19 crc kubenswrapper[5007]: I0218 19:41:19.840614 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:41:19 crc kubenswrapper[5007]: I0218 19:41:19.855247 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f72bbfc-96e8-4151-9291-bb3da55e7d2b-metrics-certs\") pod \"network-metrics-daemon-96xnl\" (UID: \"7f72bbfc-96e8-4151-9291-bb3da55e7d2b\") " pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:41:20 crc kubenswrapper[5007]: I0218 19:41:20.124233 5007 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-96xnl" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.067400 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.227225 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.251749 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.251834 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.252457 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.252525 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.252584 5007 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-cjq4g" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.253110 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.253168 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.253249 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"d5aee573afc0f80e121754fa1a797cbd6005b7be81daba670a18619e61ca1991"} pod="openshift-console/downloads-7954f5f757-cjq4g" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.253349 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" containerID="cri-o://d5aee573afc0f80e121754fa1a797cbd6005b7be81daba670a18619e61ca1991" gracePeriod=2 Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.387417 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4b30781-7982-42a9-a208-e71364ff8fa4-kubelet-dir\") pod \"a4b30781-7982-42a9-a208-e71364ff8fa4\" (UID: \"a4b30781-7982-42a9-a208-e71364ff8fa4\") " Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 
19:41:27.387785 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4b30781-7982-42a9-a208-e71364ff8fa4-kube-api-access\") pod \"a4b30781-7982-42a9-a208-e71364ff8fa4\" (UID: \"a4b30781-7982-42a9-a208-e71364ff8fa4\") " Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.387567 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4b30781-7982-42a9-a208-e71364ff8fa4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a4b30781-7982-42a9-a208-e71364ff8fa4" (UID: "a4b30781-7982-42a9-a208-e71364ff8fa4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.388337 5007 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4b30781-7982-42a9-a208-e71364ff8fa4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.392836 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b30781-7982-42a9-a208-e71364ff8fa4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a4b30781-7982-42a9-a208-e71364ff8fa4" (UID: "a4b30781-7982-42a9-a208-e71364ff8fa4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.490007 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4b30781-7982-42a9-a208-e71364ff8fa4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.634466 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a4b30781-7982-42a9-a208-e71364ff8fa4","Type":"ContainerDied","Data":"65415ccbb6e6ba6b7c42494e93605ba2e2b76bdd2ddf0290aea504a70bd25dff"} Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.634512 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.634516 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65415ccbb6e6ba6b7c42494e93605ba2e2b76bdd2ddf0290aea504a70bd25dff" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.846887 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:41:27 crc kubenswrapper[5007]: I0218 19:41:27.851834 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:41:28 crc kubenswrapper[5007]: I0218 19:41:28.641976 5007 generic.go:334] "Generic (PLEG): container finished" podID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerID="d5aee573afc0f80e121754fa1a797cbd6005b7be81daba670a18619e61ca1991" exitCode=0 Feb 18 19:41:28 crc kubenswrapper[5007]: I0218 19:41:28.642056 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cjq4g" 
event={"ID":"3da7b766-72c7-4f00-9787-4fc5d86ef9c0","Type":"ContainerDied","Data":"d5aee573afc0f80e121754fa1a797cbd6005b7be81daba670a18619e61ca1991"} Feb 18 19:41:37 crc kubenswrapper[5007]: I0218 19:41:37.251625 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 18 19:41:37 crc kubenswrapper[5007]: I0218 19:41:37.252161 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 18 19:41:37 crc kubenswrapper[5007]: I0218 19:41:37.969675 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s9wh5" Feb 18 19:41:40 crc kubenswrapper[5007]: E0218 19:41:40.832573 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 19:41:40 crc kubenswrapper[5007]: E0218 19:41:40.833086 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm2n2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-k4wz9_openshift-marketplace(8c778c82-e9ab-40e7-b084-07195587a975): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:41:40 crc kubenswrapper[5007]: E0218 19:41:40.834344 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-k4wz9" podUID="8c778c82-e9ab-40e7-b084-07195587a975" Feb 18 19:41:42 crc 
kubenswrapper[5007]: E0218 19:41:42.219713 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-k4wz9" podUID="8c778c82-e9ab-40e7-b084-07195587a975" Feb 18 19:41:42 crc kubenswrapper[5007]: E0218 19:41:42.855207 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 19:41:42 crc kubenswrapper[5007]: E0218 19:41:42.855365 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nblr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-smq9b_openshift-marketplace(057894ef-e113-4b3d-89d7-178aec98eae9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:41:42 crc kubenswrapper[5007]: E0218 19:41:42.856589 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-smq9b" podUID="057894ef-e113-4b3d-89d7-178aec98eae9" Feb 18 19:41:44 crc 
kubenswrapper[5007]: I0218 19:41:44.368975 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:41:45 crc kubenswrapper[5007]: I0218 19:41:45.664071 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:41:45 crc kubenswrapper[5007]: I0218 19:41:45.664133 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.657196 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:41:46 crc kubenswrapper[5007]: E0218 19:41:46.657505 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b30781-7982-42a9-a208-e71364ff8fa4" containerName="pruner" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.657528 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b30781-7982-42a9-a208-e71364ff8fa4" containerName="pruner" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.657668 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b30781-7982-42a9-a208-e71364ff8fa4" containerName="pruner" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.658423 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.660676 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.661529 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.671973 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.676810 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4b72398-f484-4de3-840a-97ec9db032ae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e4b72398-f484-4de3-840a-97ec9db032ae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.676853 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4b72398-f484-4de3-840a-97ec9db032ae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e4b72398-f484-4de3-840a-97ec9db032ae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.778578 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4b72398-f484-4de3-840a-97ec9db032ae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e4b72398-f484-4de3-840a-97ec9db032ae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.779010 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e4b72398-f484-4de3-840a-97ec9db032ae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e4b72398-f484-4de3-840a-97ec9db032ae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.779155 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4b72398-f484-4de3-840a-97ec9db032ae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e4b72398-f484-4de3-840a-97ec9db032ae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.796678 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4b72398-f484-4de3-840a-97ec9db032ae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e4b72398-f484-4de3-840a-97ec9db032ae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:41:46 crc kubenswrapper[5007]: E0218 19:41:46.931235 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-smq9b" podUID="057894ef-e113-4b3d-89d7-178aec98eae9" Feb 18 19:41:46 crc kubenswrapper[5007]: I0218 19:41:46.988366 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:41:47 crc kubenswrapper[5007]: E0218 19:41:47.013373 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 19:41:47 crc kubenswrapper[5007]: E0218 19:41:47.013596 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hq78t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mdh7g_openshift-marketplace(3504cb19-2058-4787-b5fb-c0b64aa2322f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:41:47 crc kubenswrapper[5007]: E0218 19:41:47.014176 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 19:41:47 crc kubenswrapper[5007]: E0218 19:41:47.014329 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhg4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompP
rofile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-z8ph5_openshift-marketplace(84dca2c7-0ed9-4824-8431-88a26864b81c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:41:47 crc kubenswrapper[5007]: E0218 19:41:47.015260 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mdh7g" podUID="3504cb19-2058-4787-b5fb-c0b64aa2322f" Feb 18 19:41:47 crc kubenswrapper[5007]: E0218 19:41:47.016332 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-z8ph5" podUID="84dca2c7-0ed9-4824-8431-88a26864b81c" Feb 18 19:41:47 crc kubenswrapper[5007]: E0218 19:41:47.035709 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 19:41:47 crc kubenswrapper[5007]: E0218 19:41:47.035872 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-879b6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-45xrd_openshift-marketplace(a403818b-8748-413d-80e7-cc5217779236): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:41:47 crc kubenswrapper[5007]: E0218 19:41:47.037130 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-45xrd" podUID="a403818b-8748-413d-80e7-cc5217779236" Feb 18 19:41:47 crc 
kubenswrapper[5007]: I0218 19:41:47.252682 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Feb 18 19:41:47 crc kubenswrapper[5007]: I0218 19:41:47.252750 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.460922 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-45xrd" podUID="a403818b-8748-413d-80e7-cc5217779236" Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.461329 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-z8ph5" podUID="84dca2c7-0ed9-4824-8431-88a26864b81c" Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.462770 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mdh7g" podUID="3504cb19-2058-4787-b5fb-c0b64aa2322f" Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.533754 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.534265 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5q966,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vddlz_openshift-marketplace(9cbfc029-9e57-4f39-942c-47f3c8f866b2): ErrImagePull: rpc error: code = Canceled desc = copying system image 
from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.535598 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vddlz" podUID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.563061 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.563322 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjsx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qk22p_openshift-marketplace(303d7555-90c4-459c-9dcf-181f93d89ed8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.564674 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qk22p" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" Feb 18 19:41:48 crc 
kubenswrapper[5007]: E0218 19:41:48.571973 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.572090 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m92l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-kg74r_openshift-marketplace(438564b4-5423-4e65-b05b-054043ad93c6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.573215 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kg74r" podUID="438564b4-5423-4e65-b05b-054043ad93c6"
Feb 18 19:41:48 crc kubenswrapper[5007]: I0218 19:41:48.765959 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cjq4g" event={"ID":"3da7b766-72c7-4f00-9787-4fc5d86ef9c0","Type":"ContainerStarted","Data":"c7d378ea67bcb6a7f6a2fafdcaf28152d8bd49035b79ecbdff45dc05aa3cde11"}
Feb 18 19:41:48 crc kubenswrapper[5007]: I0218 19:41:48.767572 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Feb 18 19:41:48 crc kubenswrapper[5007]: I0218 19:41:48.767616 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.775674 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qk22p" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8"
Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.775744 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kg74r" podUID="438564b4-5423-4e65-b05b-054043ad93c6"
Feb 18 19:41:48 crc kubenswrapper[5007]: E0218 19:41:48.775807 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vddlz" podUID="9cbfc029-9e57-4f39-942c-47f3c8f866b2"
Feb 18 19:41:48 crc kubenswrapper[5007]: I0218 19:41:48.913171 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-96xnl"]
Feb 18 19:41:48 crc kubenswrapper[5007]: I0218 19:41:48.999589 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 18 19:41:49 crc kubenswrapper[5007]: I0218 19:41:49.773763 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e4b72398-f484-4de3-840a-97ec9db032ae","Type":"ContainerStarted","Data":"2b537138d4cd1bdd497b4228580bc544281d050f1e58860eef68dedf444dd5d7"}
Feb 18 19:41:49 crc kubenswrapper[5007]: I0218 19:41:49.773976 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e4b72398-f484-4de3-840a-97ec9db032ae","Type":"ContainerStarted","Data":"87c440126e92d62277acb1863b2103d44f658ee7f68d0b55f549f3d3df98333b"}
Feb 18 19:41:49 crc kubenswrapper[5007]: I0218 19:41:49.776234 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-96xnl" event={"ID":"7f72bbfc-96e8-4151-9291-bb3da55e7d2b","Type":"ContainerStarted","Data":"3d0d864acfb06c4c0340dd1e1564fef2828412eb446f7a7fb76c05de618ce86a"}
Feb 18 19:41:49 crc kubenswrapper[5007]: I0218 19:41:49.776287 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-96xnl" event={"ID":"7f72bbfc-96e8-4151-9291-bb3da55e7d2b","Type":"ContainerStarted","Data":"fd486ba5434ed6c97cdc4f16dc144f706616df29838438c59a1958db7571e5b5"}
Feb 18 19:41:49 crc kubenswrapper[5007]: I0218 19:41:49.776300 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-96xnl" event={"ID":"7f72bbfc-96e8-4151-9291-bb3da55e7d2b","Type":"ContainerStarted","Data":"89354d09417448406383a58bb4b44f9e7ba5f8b174a3c6903075612482e95adb"}
Feb 18 19:41:49 crc kubenswrapper[5007]: I0218 19:41:49.776520 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cjq4g"
Feb 18 19:41:49 crc kubenswrapper[5007]: I0218 19:41:49.776851 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Feb 18 19:41:49 crc kubenswrapper[5007]: I0218 19:41:49.776910 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Feb 18 19:41:49 crc kubenswrapper[5007]: I0218 19:41:49.790162 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.790147975 podStartE2EDuration="3.790147975s" podCreationTimestamp="2026-02-18 19:41:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:49.78690389 +0000 UTC m=+194.569023424" watchObservedRunningTime="2026-02-18 19:41:49.790147975 +0000 UTC m=+194.572267499"
Feb 18 19:41:49 crc kubenswrapper[5007]: I0218 19:41:49.812171 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-96xnl" podStartSLOduration=172.812155393 podStartE2EDuration="2m52.812155393s" podCreationTimestamp="2026-02-18 19:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:49.802935942 +0000 UTC m=+194.585055476" watchObservedRunningTime="2026-02-18 19:41:49.812155393 +0000 UTC m=+194.594274917"
Feb 18 19:41:50 crc kubenswrapper[5007]: I0218 19:41:50.783547 5007 generic.go:334] "Generic (PLEG): container finished" podID="e4b72398-f484-4de3-840a-97ec9db032ae" containerID="2b537138d4cd1bdd497b4228580bc544281d050f1e58860eef68dedf444dd5d7" exitCode=0
Feb 18 19:41:50 crc kubenswrapper[5007]: I0218 19:41:50.783649 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e4b72398-f484-4de3-840a-97ec9db032ae","Type":"ContainerDied","Data":"2b537138d4cd1bdd497b4228580bc544281d050f1e58860eef68dedf444dd5d7"}
Feb 18 19:41:50 crc kubenswrapper[5007]: I0218 19:41:50.784648 5007 patch_prober.go:28] interesting pod/downloads-7954f5f757-cjq4g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Feb 18 19:41:50 crc kubenswrapper[5007]: I0218 19:41:50.784698 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cjq4g" podUID="3da7b766-72c7-4f00-9787-4fc5d86ef9c0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Feb 18 19:41:52 crc kubenswrapper[5007]: I0218 19:41:52.046109 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:41:52 crc kubenswrapper[5007]: I0218 19:41:52.174293 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4b72398-f484-4de3-840a-97ec9db032ae-kubelet-dir\") pod \"e4b72398-f484-4de3-840a-97ec9db032ae\" (UID: \"e4b72398-f484-4de3-840a-97ec9db032ae\") "
Feb 18 19:41:52 crc kubenswrapper[5007]: I0218 19:41:52.174812 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4b72398-f484-4de3-840a-97ec9db032ae-kube-api-access\") pod \"e4b72398-f484-4de3-840a-97ec9db032ae\" (UID: \"e4b72398-f484-4de3-840a-97ec9db032ae\") "
Feb 18 19:41:52 crc kubenswrapper[5007]: I0218 19:41:52.174526 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4b72398-f484-4de3-840a-97ec9db032ae-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e4b72398-f484-4de3-840a-97ec9db032ae" (UID: "e4b72398-f484-4de3-840a-97ec9db032ae"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:41:52 crc kubenswrapper[5007]: I0218 19:41:52.175854 5007 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4b72398-f484-4de3-840a-97ec9db032ae-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:52 crc kubenswrapper[5007]: I0218 19:41:52.181137 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b72398-f484-4de3-840a-97ec9db032ae-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e4b72398-f484-4de3-840a-97ec9db032ae" (UID: "e4b72398-f484-4de3-840a-97ec9db032ae"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:41:52 crc kubenswrapper[5007]: I0218 19:41:52.276854 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4b72398-f484-4de3-840a-97ec9db032ae-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:52 crc kubenswrapper[5007]: I0218 19:41:52.795554 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e4b72398-f484-4de3-840a-97ec9db032ae","Type":"ContainerDied","Data":"87c440126e92d62277acb1863b2103d44f658ee7f68d0b55f549f3d3df98333b"}
Feb 18 19:41:52 crc kubenswrapper[5007]: I0218 19:41:52.795601 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:41:52 crc kubenswrapper[5007]: I0218 19:41:52.795605 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87c440126e92d62277acb1863b2103d44f658ee7f68d0b55f549f3d3df98333b"
Feb 18 19:41:53 crc kubenswrapper[5007]: I0218 19:41:53.855590 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 19:41:53 crc kubenswrapper[5007]: E0218 19:41:53.856095 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b72398-f484-4de3-840a-97ec9db032ae" containerName="pruner"
Feb 18 19:41:53 crc kubenswrapper[5007]: I0218 19:41:53.856111 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b72398-f484-4de3-840a-97ec9db032ae" containerName="pruner"
Feb 18 19:41:53 crc kubenswrapper[5007]: I0218 19:41:53.856382 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b72398-f484-4de3-840a-97ec9db032ae" containerName="pruner"
Feb 18 19:41:53 crc kubenswrapper[5007]: I0218 19:41:53.857128 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:41:53 crc kubenswrapper[5007]: I0218 19:41:53.863175 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 18 19:41:53 crc kubenswrapper[5007]: I0218 19:41:53.863595 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 18 19:41:53 crc kubenswrapper[5007]: I0218 19:41:53.868331 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 19:41:54 crc kubenswrapper[5007]: I0218 19:41:54.003370 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-kube-api-access\") pod \"installer-9-crc\" (UID: \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:41:54 crc kubenswrapper[5007]: I0218 19:41:54.003471 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:41:54 crc kubenswrapper[5007]: I0218 19:41:54.003542 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-var-lock\") pod \"installer-9-crc\" (UID: \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:41:54 crc kubenswrapper[5007]: I0218 19:41:54.104846 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:41:54 crc kubenswrapper[5007]: I0218 19:41:54.104934 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-var-lock\") pod \"installer-9-crc\" (UID: \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:41:54 crc kubenswrapper[5007]: I0218 19:41:54.104982 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:41:54 crc kubenswrapper[5007]: I0218 19:41:54.105032 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-kube-api-access\") pod \"installer-9-crc\" (UID: \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:41:54 crc kubenswrapper[5007]: I0218 19:41:54.105035 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-var-lock\") pod \"installer-9-crc\" (UID: \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:41:54 crc kubenswrapper[5007]: I0218 19:41:54.121681 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-kube-api-access\") pod \"installer-9-crc\" (UID: \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:41:54 crc kubenswrapper[5007]: I0218 19:41:54.174257 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:41:54 crc kubenswrapper[5007]: I0218 19:41:54.497137 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 19:41:54 crc kubenswrapper[5007]: W0218 19:41:54.611057 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podef8063d1_2e56_42bd_9310_993ac6e4d2b9.slice/crio-70410805717f9ff29843774518e6d18b8a6844faa6d5f25ad08db192499d4e0e WatchSource:0}: Error finding container 70410805717f9ff29843774518e6d18b8a6844faa6d5f25ad08db192499d4e0e: Status 404 returned error can't find the container with id 70410805717f9ff29843774518e6d18b8a6844faa6d5f25ad08db192499d4e0e
Feb 18 19:41:54 crc kubenswrapper[5007]: I0218 19:41:54.806145 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ef8063d1-2e56-42bd-9310-993ac6e4d2b9","Type":"ContainerStarted","Data":"70410805717f9ff29843774518e6d18b8a6844faa6d5f25ad08db192499d4e0e"}
Feb 18 19:41:55 crc kubenswrapper[5007]: I0218 19:41:55.812614 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ef8063d1-2e56-42bd-9310-993ac6e4d2b9","Type":"ContainerStarted","Data":"3f222f2ebaf83cfb8bee01b1c5a9e3e66db930557772ddcdc4dc0d28a12c9cee"}
Feb 18 19:41:55 crc kubenswrapper[5007]: I0218 19:41:55.828984 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.828964402 podStartE2EDuration="2.828964402s" podCreationTimestamp="2026-02-18 19:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:55.82711332 +0000 UTC m=+200.609232854" watchObservedRunningTime="2026-02-18 19:41:55.828964402 +0000 UTC m=+200.611083926"
Feb 18 19:41:56 crc kubenswrapper[5007]: I0218 19:41:56.819213 5007 generic.go:334] "Generic (PLEG): container finished" podID="8c778c82-e9ab-40e7-b084-07195587a975" containerID="52fdd9ecc8c306909f0bd4c366812eab2f3bd4b70dbc1b351fc40a582428870f" exitCode=0
Feb 18 19:41:56 crc kubenswrapper[5007]: I0218 19:41:56.819858 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4wz9" event={"ID":"8c778c82-e9ab-40e7-b084-07195587a975","Type":"ContainerDied","Data":"52fdd9ecc8c306909f0bd4c366812eab2f3bd4b70dbc1b351fc40a582428870f"}
Feb 18 19:41:57 crc kubenswrapper[5007]: I0218 19:41:57.258304 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-cjq4g"
Feb 18 19:41:57 crc kubenswrapper[5007]: I0218 19:41:57.825651 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4wz9" event={"ID":"8c778c82-e9ab-40e7-b084-07195587a975","Type":"ContainerStarted","Data":"8495bad923e38e3a7c559fe87ea588828a334bae0911d3fae3c5b827f6e1cb45"}
Feb 18 19:41:57 crc kubenswrapper[5007]: I0218 19:41:57.846482 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k4wz9" podStartSLOduration=2.762703044 podStartE2EDuration="51.846465962s" podCreationTimestamp="2026-02-18 19:41:06 +0000 UTC" firstStartedPulling="2026-02-18 19:41:08.265733419 +0000 UTC m=+153.047852943" lastFinishedPulling="2026-02-18 19:41:57.349496317 +0000 UTC m=+202.131615861" observedRunningTime="2026-02-18 19:41:57.843903571 +0000 UTC m=+202.626023105" watchObservedRunningTime="2026-02-18 19:41:57.846465962 +0000 UTC m=+202.628585486"
Feb 18 19:41:59 crc kubenswrapper[5007]: I0218 19:41:59.838123 5007 generic.go:334] "Generic (PLEG): container finished" podID="057894ef-e113-4b3d-89d7-178aec98eae9" containerID="298fed900eab6b1aa70d403c088f2c06619ba40785cac5406f97334f2427f02c" exitCode=0
Feb 18 19:41:59 crc kubenswrapper[5007]: I0218 19:41:59.838199 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smq9b" event={"ID":"057894ef-e113-4b3d-89d7-178aec98eae9","Type":"ContainerDied","Data":"298fed900eab6b1aa70d403c088f2c06619ba40785cac5406f97334f2427f02c"}
Feb 18 19:42:00 crc kubenswrapper[5007]: I0218 19:42:00.847826 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smq9b" event={"ID":"057894ef-e113-4b3d-89d7-178aec98eae9","Type":"ContainerStarted","Data":"5b11bdabb1cca08a5cba618bd799b38246df05fded064dd9284266fea4794b0b"}
Feb 18 19:42:00 crc kubenswrapper[5007]: I0218 19:42:00.928217 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-smq9b" podStartSLOduration=2.964655903 podStartE2EDuration="54.92819426s" podCreationTimestamp="2026-02-18 19:41:06 +0000 UTC" firstStartedPulling="2026-02-18 19:41:08.268651825 +0000 UTC m=+153.050771349" lastFinishedPulling="2026-02-18 19:42:00.232190182 +0000 UTC m=+205.014309706" observedRunningTime="2026-02-18 19:42:00.866835991 +0000 UTC m=+205.648955545" watchObservedRunningTime="2026-02-18 19:42:00.92819426 +0000 UTC m=+205.710313784"
Feb 18 19:42:01 crc kubenswrapper[5007]: I0218 19:42:01.856293 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdh7g" event={"ID":"3504cb19-2058-4787-b5fb-c0b64aa2322f","Type":"ContainerStarted","Data":"6835ee4472ba0edbe69b8a44430dbfe2d752d97e0454970e3839e40b38408f33"}
Feb 18 19:42:03 crc kubenswrapper[5007]: I0218 19:42:03.868353 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg74r" event={"ID":"438564b4-5423-4e65-b05b-054043ad93c6","Type":"ContainerStarted","Data":"14680f08844fcd8db202e94277fd6643744b899bb6d7ac2a6ed23b724918d216"}
Feb 18 19:42:03 crc kubenswrapper[5007]: I0218 19:42:03.871979 5007 generic.go:334] "Generic (PLEG): container finished" podID="3504cb19-2058-4787-b5fb-c0b64aa2322f" containerID="6835ee4472ba0edbe69b8a44430dbfe2d752d97e0454970e3839e40b38408f33" exitCode=0
Feb 18 19:42:03 crc kubenswrapper[5007]: I0218 19:42:03.872078 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdh7g" event={"ID":"3504cb19-2058-4787-b5fb-c0b64aa2322f","Type":"ContainerDied","Data":"6835ee4472ba0edbe69b8a44430dbfe2d752d97e0454970e3839e40b38408f33"}
Feb 18 19:42:03 crc kubenswrapper[5007]: I0218 19:42:03.875715 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8ph5" event={"ID":"84dca2c7-0ed9-4824-8431-88a26864b81c","Type":"ContainerStarted","Data":"b52277c81287a57c5c81b0e9418f4e8ce57cbc1e817ed890395352b261bbe494"}
Feb 18 19:42:04 crc kubenswrapper[5007]: I0218 19:42:04.883530 5007 generic.go:334] "Generic (PLEG): container finished" podID="438564b4-5423-4e65-b05b-054043ad93c6" containerID="14680f08844fcd8db202e94277fd6643744b899bb6d7ac2a6ed23b724918d216" exitCode=0
Feb 18 19:42:04 crc kubenswrapper[5007]: I0218 19:42:04.883611 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg74r" event={"ID":"438564b4-5423-4e65-b05b-054043ad93c6","Type":"ContainerDied","Data":"14680f08844fcd8db202e94277fd6643744b899bb6d7ac2a6ed23b724918d216"}
Feb 18 19:42:04 crc kubenswrapper[5007]: I0218 19:42:04.885546 5007 generic.go:334] "Generic (PLEG): container finished" podID="84dca2c7-0ed9-4824-8431-88a26864b81c" containerID="b52277c81287a57c5c81b0e9418f4e8ce57cbc1e817ed890395352b261bbe494" exitCode=0
Feb 18 19:42:04 crc kubenswrapper[5007]: I0218 19:42:04.885577 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8ph5" event={"ID":"84dca2c7-0ed9-4824-8431-88a26864b81c","Type":"ContainerDied","Data":"b52277c81287a57c5c81b0e9418f4e8ce57cbc1e817ed890395352b261bbe494"}
Feb 18 19:42:06 crc kubenswrapper[5007]: I0218 19:42:06.951591 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k4wz9"
Feb 18 19:42:06 crc kubenswrapper[5007]: I0218 19:42:06.952838 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k4wz9"
Feb 18 19:42:07 crc kubenswrapper[5007]: I0218 19:42:07.032739 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-smq9b"
Feb 18 19:42:07 crc kubenswrapper[5007]: I0218 19:42:07.033397 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-smq9b"
Feb 18 19:42:07 crc kubenswrapper[5007]: I0218 19:42:07.952004 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k4wz9"
Feb 18 19:42:07 crc kubenswrapper[5007]: I0218 19:42:07.952882 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-smq9b"
Feb 18 19:42:08 crc kubenswrapper[5007]: I0218 19:42:08.950349 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k4wz9"
Feb 18 19:42:08 crc kubenswrapper[5007]: I0218 19:42:08.959335 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-smq9b"
Feb 18 19:42:11 crc kubenswrapper[5007]: I0218 19:42:11.266240 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-smq9b"]
Feb 18 19:42:11 crc kubenswrapper[5007]: I0218 19:42:11.266739 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-smq9b" podUID="057894ef-e113-4b3d-89d7-178aec98eae9" containerName="registry-server" containerID="cri-o://5b11bdabb1cca08a5cba618bd799b38246df05fded064dd9284266fea4794b0b" gracePeriod=2
Feb 18 19:42:12 crc kubenswrapper[5007]: I0218 19:42:12.927885 5007 generic.go:334] "Generic (PLEG): container finished" podID="057894ef-e113-4b3d-89d7-178aec98eae9" containerID="5b11bdabb1cca08a5cba618bd799b38246df05fded064dd9284266fea4794b0b" exitCode=0
Feb 18 19:42:12 crc kubenswrapper[5007]: I0218 19:42:12.927965 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smq9b" event={"ID":"057894ef-e113-4b3d-89d7-178aec98eae9","Type":"ContainerDied","Data":"5b11bdabb1cca08a5cba618bd799b38246df05fded064dd9284266fea4794b0b"}
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.430403 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smq9b"
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.590783 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nblr8\" (UniqueName: \"kubernetes.io/projected/057894ef-e113-4b3d-89d7-178aec98eae9-kube-api-access-nblr8\") pod \"057894ef-e113-4b3d-89d7-178aec98eae9\" (UID: \"057894ef-e113-4b3d-89d7-178aec98eae9\") "
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.591000 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057894ef-e113-4b3d-89d7-178aec98eae9-utilities\") pod \"057894ef-e113-4b3d-89d7-178aec98eae9\" (UID: \"057894ef-e113-4b3d-89d7-178aec98eae9\") "
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.591029 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057894ef-e113-4b3d-89d7-178aec98eae9-catalog-content\") pod \"057894ef-e113-4b3d-89d7-178aec98eae9\" (UID: \"057894ef-e113-4b3d-89d7-178aec98eae9\") "
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.591793 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/057894ef-e113-4b3d-89d7-178aec98eae9-utilities" (OuterVolumeSpecName: "utilities") pod "057894ef-e113-4b3d-89d7-178aec98eae9" (UID: "057894ef-e113-4b3d-89d7-178aec98eae9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.602017 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/057894ef-e113-4b3d-89d7-178aec98eae9-kube-api-access-nblr8" (OuterVolumeSpecName: "kube-api-access-nblr8") pod "057894ef-e113-4b3d-89d7-178aec98eae9" (UID: "057894ef-e113-4b3d-89d7-178aec98eae9"). InnerVolumeSpecName "kube-api-access-nblr8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.645263 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/057894ef-e113-4b3d-89d7-178aec98eae9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "057894ef-e113-4b3d-89d7-178aec98eae9" (UID: "057894ef-e113-4b3d-89d7-178aec98eae9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.692290 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/057894ef-e113-4b3d-89d7-178aec98eae9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.692330 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nblr8\" (UniqueName: \"kubernetes.io/projected/057894ef-e113-4b3d-89d7-178aec98eae9-kube-api-access-nblr8\") on node \"crc\" DevicePath \"\""
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.692343 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/057894ef-e113-4b3d-89d7-178aec98eae9-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.940991 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk22p" event={"ID":"303d7555-90c4-459c-9dcf-181f93d89ed8","Type":"ContainerStarted","Data":"d0da55afa4359b26faa89e3496b4680e00d79d503537e9944693919f94fcdb56"}
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.951178 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45xrd" event={"ID":"a403818b-8748-413d-80e7-cc5217779236","Type":"ContainerStarted","Data":"f8313f97eee1f77ebacb0d8c56fbf035a3fe20529e00af0f5a7947057447ea4a"}
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.962484 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdh7g" event={"ID":"3504cb19-2058-4787-b5fb-c0b64aa2322f","Type":"ContainerStarted","Data":"91c1c965f121023a9a7594c8932fc8de18b29bebfd508d87d155df90b05a7be4"}
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.965579 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8ph5" event={"ID":"84dca2c7-0ed9-4824-8431-88a26864b81c","Type":"ContainerStarted","Data":"3455a98e40d89562e40e6e94ba15d161b0158d0cd6a9fbe929a8cc5a96dcdd33"}
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.976585 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vddlz" event={"ID":"9cbfc029-9e57-4f39-942c-47f3c8f866b2","Type":"ContainerStarted","Data":"8f16989626bc6873aa48c6ad2ac8ae6b9b4fb11823990438e36061ce46d9543a"}
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.990107 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smq9b" event={"ID":"057894ef-e113-4b3d-89d7-178aec98eae9","Type":"ContainerDied","Data":"b9337febda48ef164c1b95dd035ecedb36439d2b2e5340ea3e180731108218cc"}
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.990173 5007 scope.go:117] "RemoveContainer" containerID="5b11bdabb1cca08a5cba618bd799b38246df05fded064dd9284266fea4794b0b"
Feb 18 19:42:13 crc kubenswrapper[5007]: I0218 19:42:13.990311 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smq9b"
Feb 18 19:42:14 crc kubenswrapper[5007]: I0218 19:42:14.003384 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg74r" event={"ID":"438564b4-5423-4e65-b05b-054043ad93c6","Type":"ContainerStarted","Data":"475e8d9d81ec86ccd8fa63a550a3f4174996acc26ecd64e7fc970eeed48a9fe7"}
Feb 18 19:42:14 crc kubenswrapper[5007]: I0218 19:42:14.010393 5007 scope.go:117] "RemoveContainer" containerID="298fed900eab6b1aa70d403c088f2c06619ba40785cac5406f97334f2427f02c"
Feb 18 19:42:14 crc kubenswrapper[5007]: I0218 19:42:14.033003 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z8ph5" podStartSLOduration=3.890111236 podStartE2EDuration="1m8.032973789s" podCreationTimestamp="2026-02-18 19:41:06 +0000 UTC" firstStartedPulling="2026-02-18 19:41:08.279782723 +0000 UTC m=+153.061902247" lastFinishedPulling="2026-02-18 19:42:12.422645276 +0000 UTC m=+217.204764800" observedRunningTime="2026-02-18 19:42:14.030400388 +0000 UTC m=+218.812519922" watchObservedRunningTime="2026-02-18 19:42:14.032973789 +0000 UTC m=+218.815093313"
Feb 18 19:42:14 crc kubenswrapper[5007]: I0218 19:42:14.061333 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mdh7g" podStartSLOduration=3.249334453 podStartE2EDuration="1m5.061314214s" podCreationTimestamp="2026-02-18 19:41:09 +0000 UTC" firstStartedPulling="2026-02-18 19:41:11.390335501 +0000 UTC m=+156.172455025" lastFinishedPulling="2026-02-18 19:42:13.202315262 +0000 UTC m=+217.984434786" observedRunningTime="2026-02-18 19:42:14.055664027 +0000 UTC m=+218.837783561" watchObservedRunningTime="2026-02-18 19:42:14.061314214 +0000 UTC m=+218.843433748"
Feb 18 19:42:14 crc kubenswrapper[5007]: I0218 19:42:14.074385 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-smq9b"]
Feb 18 19:42:14 crc kubenswrapper[5007]: I0218 19:42:14.085492 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-smq9b"]
Feb 18 19:42:14 crc kubenswrapper[5007]: I0218 19:42:14.156356 5007 scope.go:117] "RemoveContainer" containerID="bcda3bb14031c1099a5305a61a1187551070d472b5666d01fa5e680779cc3f61"
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.011136 5007 generic.go:334] "Generic (PLEG): container finished" podID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" containerID="8f16989626bc6873aa48c6ad2ac8ae6b9b4fb11823990438e36061ce46d9543a" exitCode=0
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.011239 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vddlz" event={"ID":"9cbfc029-9e57-4f39-942c-47f3c8f866b2","Type":"ContainerDied","Data":"8f16989626bc6873aa48c6ad2ac8ae6b9b4fb11823990438e36061ce46d9543a"}
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.023696 5007 generic.go:334] "Generic (PLEG): container finished" podID="303d7555-90c4-459c-9dcf-181f93d89ed8" containerID="d0da55afa4359b26faa89e3496b4680e00d79d503537e9944693919f94fcdb56" exitCode=0
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.023938 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk22p" event={"ID":"303d7555-90c4-459c-9dcf-181f93d89ed8","Type":"ContainerDied","Data":"d0da55afa4359b26faa89e3496b4680e00d79d503537e9944693919f94fcdb56"}
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.032171 5007 generic.go:334] "Generic (PLEG): container finished" podID="a403818b-8748-413d-80e7-cc5217779236" containerID="f8313f97eee1f77ebacb0d8c56fbf035a3fe20529e00af0f5a7947057447ea4a" exitCode=0
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.032263 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45xrd" event={"ID":"a403818b-8748-413d-80e7-cc5217779236","Type":"ContainerDied","Data":"f8313f97eee1f77ebacb0d8c56fbf035a3fe20529e00af0f5a7947057447ea4a"}
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.046057 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kg74r" podStartSLOduration=4.059714773 podStartE2EDuration="1m7.046032849s" podCreationTimestamp="2026-02-18 19:41:08 +0000 UTC" firstStartedPulling="2026-02-18 19:41:10.332074781 +0000 UTC m=+155.114194305" lastFinishedPulling="2026-02-18 19:42:13.318392857 +0000 UTC m=+218.100512381" observedRunningTime="2026-02-18 19:42:14.114836946 +0000 UTC m=+218.896956470" watchObservedRunningTime="2026-02-18 19:42:15.046032849 +0000 UTC m=+219.828152383"
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.665013 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.665095 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.665163 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88"
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.666071 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.666153 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec" gracePeriod=600
Feb 18 19:42:15 crc kubenswrapper[5007]: I0218 19:42:15.916168 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="057894ef-e113-4b3d-89d7-178aec98eae9" path="/var/lib/kubelet/pods/057894ef-e113-4b3d-89d7-178aec98eae9/volumes"
Feb 18 19:42:16 crc kubenswrapper[5007]: I0218 19:42:16.049905 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec" exitCode=0
Feb 18 19:42:16 crc kubenswrapper[5007]: I0218 19:42:16.049976 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec"}
Feb 18 19:42:16 crc kubenswrapper[5007]: I0218 19:42:16.050011 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"21a62a8c05493231db44d9e0f17c6cd72f955097074b39625fb84dd10a0c498d"}
Feb 18 19:42:16 crc kubenswrapper[5007]: I0218 19:42:16.054159 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/redhat-marketplace-vddlz" event={"ID":"9cbfc029-9e57-4f39-942c-47f3c8f866b2","Type":"ContainerStarted","Data":"bdca2c74852b3c989be6c7ee84b8ba2c4af88a9c34506c01e73856f7e0bb5db8"} Feb 18 19:42:16 crc kubenswrapper[5007]: I0218 19:42:16.058147 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk22p" event={"ID":"303d7555-90c4-459c-9dcf-181f93d89ed8","Type":"ContainerStarted","Data":"8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62"} Feb 18 19:42:16 crc kubenswrapper[5007]: I0218 19:42:16.060440 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45xrd" event={"ID":"a403818b-8748-413d-80e7-cc5217779236","Type":"ContainerStarted","Data":"347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16"} Feb 18 19:42:16 crc kubenswrapper[5007]: I0218 19:42:16.108091 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vddlz" podStartSLOduration=2.986899318 podStartE2EDuration="1m8.108065476s" podCreationTimestamp="2026-02-18 19:41:08 +0000 UTC" firstStartedPulling="2026-02-18 19:41:10.331520245 +0000 UTC m=+155.113639769" lastFinishedPulling="2026-02-18 19:42:15.452686403 +0000 UTC m=+220.234805927" observedRunningTime="2026-02-18 19:42:16.096450284 +0000 UTC m=+220.878569828" watchObservedRunningTime="2026-02-18 19:42:16.108065476 +0000 UTC m=+220.890185000" Feb 18 19:42:16 crc kubenswrapper[5007]: I0218 19:42:16.119206 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-45xrd" podStartSLOduration=2.915565377 podStartE2EDuration="1m10.119188494s" podCreationTimestamp="2026-02-18 19:41:06 +0000 UTC" firstStartedPulling="2026-02-18 19:41:08.276569628 +0000 UTC m=+153.058689142" lastFinishedPulling="2026-02-18 19:42:15.480192735 +0000 UTC m=+220.262312259" observedRunningTime="2026-02-18 
19:42:16.115335377 +0000 UTC m=+220.897454921" watchObservedRunningTime="2026-02-18 19:42:16.119188494 +0000 UTC m=+220.901308018" Feb 18 19:42:16 crc kubenswrapper[5007]: I0218 19:42:16.152130 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qk22p" podStartSLOduration=3.118877512 podStartE2EDuration="1m7.152105856s" podCreationTimestamp="2026-02-18 19:41:09 +0000 UTC" firstStartedPulling="2026-02-18 19:41:11.379494302 +0000 UTC m=+156.161613826" lastFinishedPulling="2026-02-18 19:42:15.412722646 +0000 UTC m=+220.194842170" observedRunningTime="2026-02-18 19:42:16.14938257 +0000 UTC m=+220.931502124" watchObservedRunningTime="2026-02-18 19:42:16.152105856 +0000 UTC m=+220.934225400" Feb 18 19:42:16 crc kubenswrapper[5007]: I0218 19:42:16.596232 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:42:16 crc kubenswrapper[5007]: I0218 19:42:16.596383 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:42:16 crc kubenswrapper[5007]: I0218 19:42:16.659072 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:42:17 crc kubenswrapper[5007]: I0218 19:42:17.178149 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:42:17 crc kubenswrapper[5007]: I0218 19:42:17.179816 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:42:18 crc kubenswrapper[5007]: I0218 19:42:18.120445 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z8ph5" Feb 18 19:42:18 crc kubenswrapper[5007]: I0218 19:42:18.222090 5007 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/certified-operators-45xrd" podUID="a403818b-8748-413d-80e7-cc5217779236" containerName="registry-server" probeResult="failure" output=< Feb 18 19:42:18 crc kubenswrapper[5007]: timeout: failed to connect service ":50051" within 1s Feb 18 19:42:18 crc kubenswrapper[5007]: > Feb 18 19:42:18 crc kubenswrapper[5007]: I0218 19:42:18.555030 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:42:18 crc kubenswrapper[5007]: I0218 19:42:18.555912 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:42:18 crc kubenswrapper[5007]: I0218 19:42:18.601370 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:42:18 crc kubenswrapper[5007]: I0218 19:42:18.949599 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:42:18 crc kubenswrapper[5007]: I0218 19:42:18.949812 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:42:18 crc kubenswrapper[5007]: I0218 19:42:18.998338 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:42:19 crc kubenswrapper[5007]: I0218 19:42:19.144641 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:42:19 crc kubenswrapper[5007]: I0218 19:42:19.743491 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:42:19 crc kubenswrapper[5007]: I0218 19:42:19.743866 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:42:20 crc kubenswrapper[5007]: I0218 19:42:20.129503 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vddlz" Feb 18 19:42:20 crc kubenswrapper[5007]: I0218 19:42:20.167818 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:42:20 crc kubenswrapper[5007]: I0218 19:42:20.167873 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:42:20 crc kubenswrapper[5007]: I0218 19:42:20.796477 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mdh7g" podUID="3504cb19-2058-4787-b5fb-c0b64aa2322f" containerName="registry-server" probeResult="failure" output=< Feb 18 19:42:20 crc kubenswrapper[5007]: timeout: failed to connect service ":50051" within 1s Feb 18 19:42:20 crc kubenswrapper[5007]: > Feb 18 19:42:21 crc kubenswrapper[5007]: I0218 19:42:21.204548 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qk22p" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" containerName="registry-server" probeResult="failure" output=< Feb 18 19:42:21 crc kubenswrapper[5007]: timeout: failed to connect service ":50051" within 1s Feb 18 19:42:21 crc kubenswrapper[5007]: > Feb 18 19:42:21 crc kubenswrapper[5007]: I0218 19:42:21.670619 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg74r"] Feb 18 19:42:21 crc kubenswrapper[5007]: I0218 19:42:21.671003 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kg74r" podUID="438564b4-5423-4e65-b05b-054043ad93c6" containerName="registry-server" containerID="cri-o://475e8d9d81ec86ccd8fa63a550a3f4174996acc26ecd64e7fc970eeed48a9fe7" gracePeriod=2 Feb 18 
19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.097280 5007 generic.go:334] "Generic (PLEG): container finished" podID="438564b4-5423-4e65-b05b-054043ad93c6" containerID="475e8d9d81ec86ccd8fa63a550a3f4174996acc26ecd64e7fc970eeed48a9fe7" exitCode=0 Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.097412 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg74r" event={"ID":"438564b4-5423-4e65-b05b-054043ad93c6","Type":"ContainerDied","Data":"475e8d9d81ec86ccd8fa63a550a3f4174996acc26ecd64e7fc970eeed48a9fe7"} Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.097847 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg74r" event={"ID":"438564b4-5423-4e65-b05b-054043ad93c6","Type":"ContainerDied","Data":"c7a572fa5231a337706a0f3e7c627aa59bca14eb54786c3ab84066d1b7478779"} Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.097872 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7a572fa5231a337706a0f3e7c627aa59bca14eb54786c3ab84066d1b7478779" Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.133286 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.210983 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/438564b4-5423-4e65-b05b-054043ad93c6-catalog-content\") pod \"438564b4-5423-4e65-b05b-054043ad93c6\" (UID: \"438564b4-5423-4e65-b05b-054043ad93c6\") " Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.211038 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m92l7\" (UniqueName: \"kubernetes.io/projected/438564b4-5423-4e65-b05b-054043ad93c6-kube-api-access-m92l7\") pod \"438564b4-5423-4e65-b05b-054043ad93c6\" (UID: \"438564b4-5423-4e65-b05b-054043ad93c6\") " Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.223654 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438564b4-5423-4e65-b05b-054043ad93c6-kube-api-access-m92l7" (OuterVolumeSpecName: "kube-api-access-m92l7") pod "438564b4-5423-4e65-b05b-054043ad93c6" (UID: "438564b4-5423-4e65-b05b-054043ad93c6"). InnerVolumeSpecName "kube-api-access-m92l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.248649 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/438564b4-5423-4e65-b05b-054043ad93c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "438564b4-5423-4e65-b05b-054043ad93c6" (UID: "438564b4-5423-4e65-b05b-054043ad93c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.312489 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/438564b4-5423-4e65-b05b-054043ad93c6-utilities\") pod \"438564b4-5423-4e65-b05b-054043ad93c6\" (UID: \"438564b4-5423-4e65-b05b-054043ad93c6\") " Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.313039 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m92l7\" (UniqueName: \"kubernetes.io/projected/438564b4-5423-4e65-b05b-054043ad93c6-kube-api-access-m92l7\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.313065 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/438564b4-5423-4e65-b05b-054043ad93c6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.313335 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/438564b4-5423-4e65-b05b-054043ad93c6-utilities" (OuterVolumeSpecName: "utilities") pod "438564b4-5423-4e65-b05b-054043ad93c6" (UID: "438564b4-5423-4e65-b05b-054043ad93c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:22 crc kubenswrapper[5007]: I0218 19:42:22.413490 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/438564b4-5423-4e65-b05b-054043ad93c6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:23 crc kubenswrapper[5007]: I0218 19:42:23.101924 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg74r" Feb 18 19:42:23 crc kubenswrapper[5007]: I0218 19:42:23.133115 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg74r"] Feb 18 19:42:23 crc kubenswrapper[5007]: I0218 19:42:23.136604 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg74r"] Feb 18 19:42:23 crc kubenswrapper[5007]: I0218 19:42:23.916124 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438564b4-5423-4e65-b05b-054043ad93c6" path="/var/lib/kubelet/pods/438564b4-5423-4e65-b05b-054043ad93c6/volumes" Feb 18 19:42:26 crc kubenswrapper[5007]: I0218 19:42:26.581053 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b2xj7"] Feb 18 19:42:27 crc kubenswrapper[5007]: I0218 19:42:27.221386 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:42:27 crc kubenswrapper[5007]: I0218 19:42:27.263998 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:42:29 crc kubenswrapper[5007]: I0218 19:42:29.469118 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-45xrd"] Feb 18 19:42:29 crc kubenswrapper[5007]: I0218 19:42:29.469882 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-45xrd" podUID="a403818b-8748-413d-80e7-cc5217779236" containerName="registry-server" containerID="cri-o://347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16" gracePeriod=2 Feb 18 19:42:29 crc kubenswrapper[5007]: I0218 19:42:29.785453 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:42:29 crc 
kubenswrapper[5007]: I0218 19:42:29.832916 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mdh7g" Feb 18 19:42:29 crc kubenswrapper[5007]: I0218 19:42:29.844166 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:42:29 crc kubenswrapper[5007]: I0218 19:42:29.929892 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a403818b-8748-413d-80e7-cc5217779236-catalog-content\") pod \"a403818b-8748-413d-80e7-cc5217779236\" (UID: \"a403818b-8748-413d-80e7-cc5217779236\") " Feb 18 19:42:29 crc kubenswrapper[5007]: I0218 19:42:29.929951 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-879b6\" (UniqueName: \"kubernetes.io/projected/a403818b-8748-413d-80e7-cc5217779236-kube-api-access-879b6\") pod \"a403818b-8748-413d-80e7-cc5217779236\" (UID: \"a403818b-8748-413d-80e7-cc5217779236\") " Feb 18 19:42:29 crc kubenswrapper[5007]: I0218 19:42:29.930096 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a403818b-8748-413d-80e7-cc5217779236-utilities\") pod \"a403818b-8748-413d-80e7-cc5217779236\" (UID: \"a403818b-8748-413d-80e7-cc5217779236\") " Feb 18 19:42:29 crc kubenswrapper[5007]: I0218 19:42:29.930885 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a403818b-8748-413d-80e7-cc5217779236-utilities" (OuterVolumeSpecName: "utilities") pod "a403818b-8748-413d-80e7-cc5217779236" (UID: "a403818b-8748-413d-80e7-cc5217779236"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:29 crc kubenswrapper[5007]: I0218 19:42:29.935701 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a403818b-8748-413d-80e7-cc5217779236-kube-api-access-879b6" (OuterVolumeSpecName: "kube-api-access-879b6") pod "a403818b-8748-413d-80e7-cc5217779236" (UID: "a403818b-8748-413d-80e7-cc5217779236"). InnerVolumeSpecName "kube-api-access-879b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:29 crc kubenswrapper[5007]: I0218 19:42:29.981212 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a403818b-8748-413d-80e7-cc5217779236-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a403818b-8748-413d-80e7-cc5217779236" (UID: "a403818b-8748-413d-80e7-cc5217779236"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.032781 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a403818b-8748-413d-80e7-cc5217779236-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.032815 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a403818b-8748-413d-80e7-cc5217779236-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.032826 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-879b6\" (UniqueName: \"kubernetes.io/projected/a403818b-8748-413d-80e7-cc5217779236-kube-api-access-879b6\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.149388 5007 generic.go:334] "Generic (PLEG): container finished" podID="a403818b-8748-413d-80e7-cc5217779236" 
containerID="347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16" exitCode=0 Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.149480 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45xrd" event={"ID":"a403818b-8748-413d-80e7-cc5217779236","Type":"ContainerDied","Data":"347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16"} Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.149562 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45xrd" event={"ID":"a403818b-8748-413d-80e7-cc5217779236","Type":"ContainerDied","Data":"b38a995e1c088172278435dafc7cf20a8960e61c8976315c79dbe8940aa8bf15"} Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.149585 5007 scope.go:117] "RemoveContainer" containerID="347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.149580 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-45xrd" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.169348 5007 scope.go:117] "RemoveContainer" containerID="f8313f97eee1f77ebacb0d8c56fbf035a3fe20529e00af0f5a7947057447ea4a" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.190410 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-45xrd"] Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.194520 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-45xrd"] Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.195146 5007 scope.go:117] "RemoveContainer" containerID="fd305a191ca36eb6daeb45824ff65c827a7b959ed14f892a0f4d4e84e714d584" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.215696 5007 scope.go:117] "RemoveContainer" containerID="347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16" Feb 18 19:42:30 crc kubenswrapper[5007]: E0218 19:42:30.217440 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16\": container with ID starting with 347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16 not found: ID does not exist" containerID="347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.217502 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16"} err="failed to get container status \"347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16\": rpc error: code = NotFound desc = could not find container \"347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16\": container with ID starting with 347aa907f06bf57d86efc0ee9a57cbc50bc067c0b8f47e4cbb6e82fa6107ca16 not 
found: ID does not exist" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.217545 5007 scope.go:117] "RemoveContainer" containerID="f8313f97eee1f77ebacb0d8c56fbf035a3fe20529e00af0f5a7947057447ea4a" Feb 18 19:42:30 crc kubenswrapper[5007]: E0218 19:42:30.218035 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8313f97eee1f77ebacb0d8c56fbf035a3fe20529e00af0f5a7947057447ea4a\": container with ID starting with f8313f97eee1f77ebacb0d8c56fbf035a3fe20529e00af0f5a7947057447ea4a not found: ID does not exist" containerID="f8313f97eee1f77ebacb0d8c56fbf035a3fe20529e00af0f5a7947057447ea4a" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.218045 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.218061 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8313f97eee1f77ebacb0d8c56fbf035a3fe20529e00af0f5a7947057447ea4a"} err="failed to get container status \"f8313f97eee1f77ebacb0d8c56fbf035a3fe20529e00af0f5a7947057447ea4a\": rpc error: code = NotFound desc = could not find container \"f8313f97eee1f77ebacb0d8c56fbf035a3fe20529e00af0f5a7947057447ea4a\": container with ID starting with f8313f97eee1f77ebacb0d8c56fbf035a3fe20529e00af0f5a7947057447ea4a not found: ID does not exist" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.218078 5007 scope.go:117] "RemoveContainer" containerID="fd305a191ca36eb6daeb45824ff65c827a7b959ed14f892a0f4d4e84e714d584" Feb 18 19:42:30 crc kubenswrapper[5007]: E0218 19:42:30.218417 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd305a191ca36eb6daeb45824ff65c827a7b959ed14f892a0f4d4e84e714d584\": container with ID starting with fd305a191ca36eb6daeb45824ff65c827a7b959ed14f892a0f4d4e84e714d584 not found: ID 
does not exist" containerID="fd305a191ca36eb6daeb45824ff65c827a7b959ed14f892a0f4d4e84e714d584" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.218467 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd305a191ca36eb6daeb45824ff65c827a7b959ed14f892a0f4d4e84e714d584"} err="failed to get container status \"fd305a191ca36eb6daeb45824ff65c827a7b959ed14f892a0f4d4e84e714d584\": rpc error: code = NotFound desc = could not find container \"fd305a191ca36eb6daeb45824ff65c827a7b959ed14f892a0f4d4e84e714d584\": container with ID starting with fd305a191ca36eb6daeb45824ff65c827a7b959ed14f892a0f4d4e84e714d584 not found: ID does not exist" Feb 18 19:42:30 crc kubenswrapper[5007]: I0218 19:42:30.260847 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:42:31 crc kubenswrapper[5007]: I0218 19:42:31.925383 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a403818b-8748-413d-80e7-cc5217779236" path="/var/lib/kubelet/pods/a403818b-8748-413d-80e7-cc5217779236/volumes" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.068617 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qk22p"] Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.163975 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qk22p" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" containerName="registry-server" containerID="cri-o://8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62" gracePeriod=2 Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.603627 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.774526 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303d7555-90c4-459c-9dcf-181f93d89ed8-catalog-content\") pod \"303d7555-90c4-459c-9dcf-181f93d89ed8\" (UID: \"303d7555-90c4-459c-9dcf-181f93d89ed8\") " Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.774725 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjsx5\" (UniqueName: \"kubernetes.io/projected/303d7555-90c4-459c-9dcf-181f93d89ed8-kube-api-access-jjsx5\") pod \"303d7555-90c4-459c-9dcf-181f93d89ed8\" (UID: \"303d7555-90c4-459c-9dcf-181f93d89ed8\") " Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.774780 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303d7555-90c4-459c-9dcf-181f93d89ed8-utilities\") pod \"303d7555-90c4-459c-9dcf-181f93d89ed8\" (UID: \"303d7555-90c4-459c-9dcf-181f93d89ed8\") " Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.775665 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/303d7555-90c4-459c-9dcf-181f93d89ed8-utilities" (OuterVolumeSpecName: "utilities") pod "303d7555-90c4-459c-9dcf-181f93d89ed8" (UID: "303d7555-90c4-459c-9dcf-181f93d89ed8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.779317 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303d7555-90c4-459c-9dcf-181f93d89ed8-kube-api-access-jjsx5" (OuterVolumeSpecName: "kube-api-access-jjsx5") pod "303d7555-90c4-459c-9dcf-181f93d89ed8" (UID: "303d7555-90c4-459c-9dcf-181f93d89ed8"). InnerVolumeSpecName "kube-api-access-jjsx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.876313 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjsx5\" (UniqueName: \"kubernetes.io/projected/303d7555-90c4-459c-9dcf-181f93d89ed8-kube-api-access-jjsx5\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.876364 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303d7555-90c4-459c-9dcf-181f93d89ed8-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.895297 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/303d7555-90c4-459c-9dcf-181f93d89ed8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "303d7555-90c4-459c-9dcf-181f93d89ed8" (UID: "303d7555-90c4-459c-9dcf-181f93d89ed8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968142 5007 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.968416 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a403818b-8748-413d-80e7-cc5217779236" containerName="registry-server" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968448 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a403818b-8748-413d-80e7-cc5217779236" containerName="registry-server" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.968459 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a403818b-8748-413d-80e7-cc5217779236" containerName="extract-content" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968467 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a403818b-8748-413d-80e7-cc5217779236" 
containerName="extract-content" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.968475 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" containerName="extract-content" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968481 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" containerName="extract-content" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.968487 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057894ef-e113-4b3d-89d7-178aec98eae9" containerName="registry-server" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968493 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="057894ef-e113-4b3d-89d7-178aec98eae9" containerName="registry-server" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.968503 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438564b4-5423-4e65-b05b-054043ad93c6" containerName="extract-content" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968509 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="438564b4-5423-4e65-b05b-054043ad93c6" containerName="extract-content" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.968541 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438564b4-5423-4e65-b05b-054043ad93c6" containerName="registry-server" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968551 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="438564b4-5423-4e65-b05b-054043ad93c6" containerName="registry-server" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.968559 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057894ef-e113-4b3d-89d7-178aec98eae9" containerName="extract-utilities" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968565 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="057894ef-e113-4b3d-89d7-178aec98eae9" 
containerName="extract-utilities" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.968576 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" containerName="extract-utilities" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968583 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" containerName="extract-utilities" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.968594 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a403818b-8748-413d-80e7-cc5217779236" containerName="extract-utilities" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968602 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a403818b-8748-413d-80e7-cc5217779236" containerName="extract-utilities" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.968612 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057894ef-e113-4b3d-89d7-178aec98eae9" containerName="extract-content" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968619 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="057894ef-e113-4b3d-89d7-178aec98eae9" containerName="extract-content" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.968627 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438564b4-5423-4e65-b05b-054043ad93c6" containerName="extract-utilities" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968633 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="438564b4-5423-4e65-b05b-054043ad93c6" containerName="extract-utilities" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.968642 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" containerName="registry-server" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968649 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" 
containerName="registry-server" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968756 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" containerName="registry-server" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968769 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="a403818b-8748-413d-80e7-cc5217779236" containerName="registry-server" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968781 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="438564b4-5423-4e65-b05b-054043ad93c6" containerName="registry-server" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.968792 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="057894ef-e113-4b3d-89d7-178aec98eae9" containerName="registry-server" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.969171 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.970879 5007 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.971571 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55" gracePeriod=15 Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.971610 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123" gracePeriod=15 Feb 18 19:42:32 crc 
kubenswrapper[5007]: I0218 19:42:32.971510 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf" gracePeriod=15 Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.971625 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97" gracePeriod=15 Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.971718 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b" gracePeriod=15 Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.974144 5007 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.974604 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.974634 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.974651 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.974681 5007 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.974703 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.974717 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.974732 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.974744 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.974807 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.974825 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.974843 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.974896 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:42:32 crc kubenswrapper[5007]: E0218 19:42:32.974915 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:42:32 crc 
kubenswrapper[5007]: I0218 19:42:32.974928 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.975357 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.975393 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.975411 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.975472 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.975497 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.975931 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:42:32 crc kubenswrapper[5007]: I0218 19:42:32.979386 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303d7555-90c4-459c-9dcf-181f93d89ed8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.040010 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.081333 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.081705 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.081732 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.082911 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.082954 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc 
kubenswrapper[5007]: I0218 19:42:33.083074 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.083111 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.083143 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.173629 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.175239 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.176235 5007 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123" exitCode=0 Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.176280 5007 generic.go:334] "Generic 
(PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97" exitCode=0 Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.176289 5007 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b" exitCode=0 Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.176299 5007 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55" exitCode=2 Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.176357 5007 scope.go:117] "RemoveContainer" containerID="c744195e733466c82a26d081f3be759280885287f424df48964c4d555f93a30c" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.178985 5007 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.179037 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.180532 5007 generic.go:334] "Generic (PLEG): container finished" podID="303d7555-90c4-459c-9dcf-181f93d89ed8" containerID="8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62" exitCode=0 Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.180595 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk22p" 
event={"ID":"303d7555-90c4-459c-9dcf-181f93d89ed8","Type":"ContainerDied","Data":"8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62"} Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.180638 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk22p" event={"ID":"303d7555-90c4-459c-9dcf-181f93d89ed8","Type":"ContainerDied","Data":"bd2e8a78bb79414719018c25e92332ff6b1b8b853e96ac74c3748232032272d6"} Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.180776 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qk22p" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.182898 5007 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.183182 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.183355 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184079 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184121 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184163 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184179 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184199 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184221 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184242 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184273 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184341 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184375 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184395 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184416 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184460 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184485 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184513 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.184533 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") 
pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.232927 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.233914 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.235049 5007 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.241457 5007 scope.go:117] "RemoveContainer" containerID="8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.263158 5007 scope.go:117] "RemoveContainer" containerID="d0da55afa4359b26faa89e3496b4680e00d79d503537e9944693919f94fcdb56" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.286033 5007 scope.go:117] "RemoveContainer" containerID="080795640cd3510d59b1d7a68dc9205562125b72ca7ea4b2c95c1a105888ddb1" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.318386 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.324370 5007 scope.go:117] "RemoveContainer" containerID="8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62" Feb 18 19:42:33 crc kubenswrapper[5007]: E0218 19:42:33.324795 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62\": container with ID starting with 8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62 not found: ID does not exist" containerID="8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.324852 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62"} err="failed to get container status \"8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62\": rpc error: code = NotFound desc = could not find container \"8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62\": container with ID starting with 8dbeed18026cc2710b60bb68ade280d59da4f9b9508e8d5cf338a80df71afd62 not found: ID does not exist" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.324894 5007 scope.go:117] "RemoveContainer" containerID="d0da55afa4359b26faa89e3496b4680e00d79d503537e9944693919f94fcdb56" Feb 18 19:42:33 crc kubenswrapper[5007]: E0218 19:42:33.325357 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0da55afa4359b26faa89e3496b4680e00d79d503537e9944693919f94fcdb56\": container with ID starting with d0da55afa4359b26faa89e3496b4680e00d79d503537e9944693919f94fcdb56 not found: ID does not exist" containerID="d0da55afa4359b26faa89e3496b4680e00d79d503537e9944693919f94fcdb56" Feb 18 19:42:33 crc 
kubenswrapper[5007]: I0218 19:42:33.325383 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0da55afa4359b26faa89e3496b4680e00d79d503537e9944693919f94fcdb56"} err="failed to get container status \"d0da55afa4359b26faa89e3496b4680e00d79d503537e9944693919f94fcdb56\": rpc error: code = NotFound desc = could not find container \"d0da55afa4359b26faa89e3496b4680e00d79d503537e9944693919f94fcdb56\": container with ID starting with d0da55afa4359b26faa89e3496b4680e00d79d503537e9944693919f94fcdb56 not found: ID does not exist" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.325398 5007 scope.go:117] "RemoveContainer" containerID="080795640cd3510d59b1d7a68dc9205562125b72ca7ea4b2c95c1a105888ddb1" Feb 18 19:42:33 crc kubenswrapper[5007]: E0218 19:42:33.325858 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"080795640cd3510d59b1d7a68dc9205562125b72ca7ea4b2c95c1a105888ddb1\": container with ID starting with 080795640cd3510d59b1d7a68dc9205562125b72ca7ea4b2c95c1a105888ddb1 not found: ID does not exist" containerID="080795640cd3510d59b1d7a68dc9205562125b72ca7ea4b2c95c1a105888ddb1" Feb 18 19:42:33 crc kubenswrapper[5007]: I0218 19:42:33.325896 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"080795640cd3510d59b1d7a68dc9205562125b72ca7ea4b2c95c1a105888ddb1"} err="failed to get container status \"080795640cd3510d59b1d7a68dc9205562125b72ca7ea4b2c95c1a105888ddb1\": rpc error: code = NotFound desc = could not find container \"080795640cd3510d59b1d7a68dc9205562125b72ca7ea4b2c95c1a105888ddb1\": container with ID starting with 080795640cd3510d59b1d7a68dc9205562125b72ca7ea4b2c95c1a105888ddb1 not found: ID does not exist" Feb 18 19:42:33 crc kubenswrapper[5007]: W0218 19:42:33.345553 5007 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-885d41a73ee3682b1b426df5bfc21ba1980d59bd9b42bd8b779f98420319e914 WatchSource:0}: Error finding container 885d41a73ee3682b1b426df5bfc21ba1980d59bd9b42bd8b779f98420319e914: Status 404 returned error can't find the container with id 885d41a73ee3682b1b426df5bfc21ba1980d59bd9b42bd8b779f98420319e914 Feb 18 19:42:33 crc kubenswrapper[5007]: E0218 19:42:33.349541 5007 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956eb549e114b7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:42:33.347781815 +0000 UTC m=+238.129901339,LastTimestamp:2026-02-18 19:42:33.347781815 +0000 UTC m=+238.129901339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:42:34 crc kubenswrapper[5007]: I0218 19:42:34.201345 5007 generic.go:334] "Generic (PLEG): container finished" podID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" containerID="3f222f2ebaf83cfb8bee01b1c5a9e3e66db930557772ddcdc4dc0d28a12c9cee" exitCode=0 Feb 18 19:42:34 crc kubenswrapper[5007]: I0218 19:42:34.201495 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ef8063d1-2e56-42bd-9310-993ac6e4d2b9","Type":"ContainerDied","Data":"3f222f2ebaf83cfb8bee01b1c5a9e3e66db930557772ddcdc4dc0d28a12c9cee"} Feb 18 19:42:34 crc kubenswrapper[5007]: I0218 19:42:34.202849 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:34 crc kubenswrapper[5007]: I0218 19:42:34.203288 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6b37a5d301b180728cf9229ec036af613f6b04db2d684b45fe4e7adfbfc4fa89"} Feb 18 19:42:34 crc kubenswrapper[5007]: I0218 19:42:34.203387 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"885d41a73ee3682b1b426df5bfc21ba1980d59bd9b42bd8b779f98420319e914"} Feb 18 19:42:34 crc kubenswrapper[5007]: I0218 19:42:34.203316 5007 status_manager.go:851] "Failed to get status for pod" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:34 crc kubenswrapper[5007]: I0218 19:42:34.204385 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": 
dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:34 crc kubenswrapper[5007]: I0218 19:42:34.205031 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:34 crc kubenswrapper[5007]: I0218 19:42:34.205346 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:34 crc kubenswrapper[5007]: I0218 19:42:34.205647 5007 status_manager.go:851] "Failed to get status for pod" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:34 crc kubenswrapper[5007]: I0218 19:42:34.207554 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.365936 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.367732 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.368747 5007 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.368998 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.369226 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.369481 5007 status_manager.go:851] "Failed to get status for pod" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.481440 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.481921 5007 status_manager.go:851] "Failed to get status for pod" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.482118 5007 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.482340 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.482610 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.530353 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:42:35 
crc kubenswrapper[5007]: I0218 19:42:35.530466 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.530533 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.530864 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.530912 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.530933 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.631614 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-kubelet-dir\") pod \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\" (UID: \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\") " Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.631739 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ef8063d1-2e56-42bd-9310-993ac6e4d2b9" (UID: "ef8063d1-2e56-42bd-9310-993ac6e4d2b9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.631836 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-kube-api-access\") pod \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\" (UID: \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\") " Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.631918 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-var-lock\") pod \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\" (UID: \"ef8063d1-2e56-42bd-9310-993ac6e4d2b9\") " Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.631974 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "ef8063d1-2e56-42bd-9310-993ac6e4d2b9" (UID: "ef8063d1-2e56-42bd-9310-993ac6e4d2b9"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.632286 5007 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.632306 5007 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.632333 5007 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.632342 5007 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.632350 5007 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.642602 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ef8063d1-2e56-42bd-9310-993ac6e4d2b9" (UID: "ef8063d1-2e56-42bd-9310-993ac6e4d2b9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.737297 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef8063d1-2e56-42bd-9310-993ac6e4d2b9-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.913712 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.915414 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.915801 5007 status_manager.go:851] "Failed to get status for pod" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.916013 5007 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:35 crc kubenswrapper[5007]: I0218 19:42:35.918107 5007 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.223409 5007 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf" exitCode=0 Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.223499 5007 scope.go:117] "RemoveContainer" containerID="5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.223520 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.224451 5007 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.224902 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.225188 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc 
kubenswrapper[5007]: I0218 19:42:36.225662 5007 status_manager.go:851] "Failed to get status for pod" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.227803 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ef8063d1-2e56-42bd-9310-993ac6e4d2b9","Type":"ContainerDied","Data":"70410805717f9ff29843774518e6d18b8a6844faa6d5f25ad08db192499d4e0e"} Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.227854 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.227844 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70410805717f9ff29843774518e6d18b8a6844faa6d5f25ad08db192499d4e0e" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.228522 5007 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.228868 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.229230 5007 status_manager.go:851] "Failed to get 
status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.229515 5007 status_manager.go:851] "Failed to get status for pod" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.233217 5007 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.233632 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.233961 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.234350 5007 status_manager.go:851] "Failed to get status for pod" 
podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.241506 5007 scope.go:117] "RemoveContainer" containerID="23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.256236 5007 scope.go:117] "RemoveContainer" containerID="77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.270258 5007 scope.go:117] "RemoveContainer" containerID="37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.290467 5007 scope.go:117] "RemoveContainer" containerID="fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.313498 5007 scope.go:117] "RemoveContainer" containerID="3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.331525 5007 scope.go:117] "RemoveContainer" containerID="5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123" Feb 18 19:42:36 crc kubenswrapper[5007]: E0218 19:42:36.333333 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\": container with ID starting with 5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123 not found: ID does not exist" containerID="5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.333385 5007 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123"} err="failed to get container status \"5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\": rpc error: code = NotFound desc = could not find container \"5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123\": container with ID starting with 5485d7dc7a89d476200f376cfceca65997017cd2437d3cc3619f6c46facad123 not found: ID does not exist" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.333409 5007 scope.go:117] "RemoveContainer" containerID="23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97" Feb 18 19:42:36 crc kubenswrapper[5007]: E0218 19:42:36.333765 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\": container with ID starting with 23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97 not found: ID does not exist" containerID="23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.333807 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97"} err="failed to get container status \"23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\": rpc error: code = NotFound desc = could not find container \"23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97\": container with ID starting with 23480805ff50003ee2e71aa5565da4763367a685e0ac0c6d74ec820f341f9d97 not found: ID does not exist" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.333838 5007 scope.go:117] "RemoveContainer" containerID="77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b" Feb 18 19:42:36 crc kubenswrapper[5007]: E0218 19:42:36.334173 5007 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\": container with ID starting with 77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b not found: ID does not exist" containerID="77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.334220 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b"} err="failed to get container status \"77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\": rpc error: code = NotFound desc = could not find container \"77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b\": container with ID starting with 77df8d14a63fde3064cff3ac6cca90c917b19755037f5b9aeb58ec16f6dc2c5b not found: ID does not exist" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.334236 5007 scope.go:117] "RemoveContainer" containerID="37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55" Feb 18 19:42:36 crc kubenswrapper[5007]: E0218 19:42:36.334628 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\": container with ID starting with 37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55 not found: ID does not exist" containerID="37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.334675 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55"} err="failed to get container status \"37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\": rpc error: code = NotFound desc = could not find container 
\"37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55\": container with ID starting with 37be244bb88576badacefb43ca848b216e213d911f203ed939eb05c51e14da55 not found: ID does not exist" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.334689 5007 scope.go:117] "RemoveContainer" containerID="fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf" Feb 18 19:42:36 crc kubenswrapper[5007]: E0218 19:42:36.334955 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\": container with ID starting with fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf not found: ID does not exist" containerID="fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.334975 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf"} err="failed to get container status \"fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\": rpc error: code = NotFound desc = could not find container \"fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf\": container with ID starting with fd3cb77cabaee5183c0b50d4fa2aeb328407204b9d516cf260b29776a5d914cf not found: ID does not exist" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.335012 5007 scope.go:117] "RemoveContainer" containerID="3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208" Feb 18 19:42:36 crc kubenswrapper[5007]: E0218 19:42:36.336732 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\": container with ID starting with 3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208 not found: ID does not exist" 
containerID="3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208" Feb 18 19:42:36 crc kubenswrapper[5007]: I0218 19:42:36.336751 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208"} err="failed to get container status \"3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\": rpc error: code = NotFound desc = could not find container \"3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208\": container with ID starting with 3fe4037fe228a1859ff8b17bbd202fe5265755d218a3963aaf8a791808116208 not found: ID does not exist" Feb 18 19:42:36 crc kubenswrapper[5007]: E0218 19:42:36.817517 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:42:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:42:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:42:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:42:36Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: E0218 19:42:36.817736 5007 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: E0218 19:42:36.817879 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: E0218 19:42:36.818019 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: E0218 19:42:36.818155 5007 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:36 crc kubenswrapper[5007]: E0218 19:42:36.818167 5007 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:42:37 crc kubenswrapper[5007]: E0218 19:42:37.529643 5007 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:37 crc kubenswrapper[5007]: E0218 19:42:37.529865 5007 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:37 crc kubenswrapper[5007]: E0218 19:42:37.530064 5007 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:37 crc kubenswrapper[5007]: E0218 19:42:37.530260 5007 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:37 crc kubenswrapper[5007]: E0218 19:42:37.530466 5007 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:37 crc kubenswrapper[5007]: I0218 19:42:37.530496 5007 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 19:42:37 crc kubenswrapper[5007]: E0218 19:42:37.530773 5007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="200ms" Feb 18 19:42:37 crc kubenswrapper[5007]: E0218 19:42:37.731350 5007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms" Feb 18 19:42:38 crc kubenswrapper[5007]: E0218 19:42:38.133331 5007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="800ms" Feb 18 19:42:38 crc 
kubenswrapper[5007]: E0218 19:42:38.758806 5007 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956eb549e114b7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:42:33.347781815 +0000 UTC m=+238.129901339,LastTimestamp:2026-02-18 19:42:33.347781815 +0000 UTC m=+238.129901339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:42:38 crc kubenswrapper[5007]: E0218 19:42:38.935256 5007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s" Feb 18 19:42:40 crc kubenswrapper[5007]: E0218 19:42:40.536306 5007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s" Feb 18 19:42:43 crc kubenswrapper[5007]: E0218 19:42:43.737984 5007 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="6.4s" Feb 18 19:42:45 crc kubenswrapper[5007]: I0218 19:42:45.914775 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:45 crc kubenswrapper[5007]: I0218 19:42:45.916016 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:45 crc kubenswrapper[5007]: I0218 19:42:45.916561 5007 status_manager.go:851] "Failed to get status for pod" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.310837 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.311567 5007 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873" exitCode=1 Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.311610 5007 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873"} Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.312190 5007 scope.go:117] "RemoveContainer" containerID="1ae1f6761cc20bd5f33903f154f01ed6495e5baea943d5bf435cdc4dc330a873" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.312790 5007 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.313118 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.313373 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.313662 5007 status_manager.go:851] "Failed to get status for pod" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: 
connection refused" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.909251 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.910669 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.911265 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.911975 5007 status_manager.go:851] "Failed to get status for pod" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.912624 5007 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.923768 5007 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="52555a16-10cb-4142-a2ca-b0a2fcc76e4e" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.923845 5007 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52555a16-10cb-4142-a2ca-b0a2fcc76e4e" Feb 18 19:42:47 crc kubenswrapper[5007]: E0218 19:42:47.924286 5007 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.924837 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:47 crc kubenswrapper[5007]: W0218 19:42:47.941254 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-26a36f5026b1461d67bb95b43eebec9d4ca5bff125ff6085070f70ec54d617d9 WatchSource:0}: Error finding container 26a36f5026b1461d67bb95b43eebec9d4ca5bff125ff6085070f70ec54d617d9: Status 404 returned error can't find the container with id 26a36f5026b1461d67bb95b43eebec9d4ca5bff125ff6085070f70ec54d617d9 Feb 18 19:42:47 crc kubenswrapper[5007]: I0218 19:42:47.957565 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.320482 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.320850 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"53603a7f8acaa68d7520673480e60c609767d082a634ba3844cce469e0e3ec1c"} Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.321636 5007 status_manager.go:851] "Failed to get status for pod" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.321817 5007 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.322095 5007 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a0c30b0bd30e03d30e31f59e1e0414b055b21199018606e19a47effb62f94c27" exitCode=0 Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.322086 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.322140 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a0c30b0bd30e03d30e31f59e1e0414b055b21199018606e19a47effb62f94c27"} Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.322217 
5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"26a36f5026b1461d67bb95b43eebec9d4ca5bff125ff6085070f70ec54d617d9"} Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.322464 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.322654 5007 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52555a16-10cb-4142-a2ca-b0a2fcc76e4e" Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.322691 5007 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52555a16-10cb-4142-a2ca-b0a2fcc76e4e" Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.322762 5007 status_manager.go:851] "Failed to get status for pod" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.322987 5007 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:48 crc kubenswrapper[5007]: E0218 19:42:48.323154 5007 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.323260 5007 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:48 crc kubenswrapper[5007]: I0218 19:42:48.323550 5007 status_manager.go:851] "Failed to get status for pod" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" pod="openshift-marketplace/redhat-operators-qk22p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qk22p\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 18 19:42:49 crc kubenswrapper[5007]: I0218 19:42:49.355106 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a6ce0984c9d899849c1b5f877fdbe07e732ae865b1b791112dbd13222d615ddc"} Feb 18 19:42:49 crc kubenswrapper[5007]: I0218 19:42:49.355406 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6a66435d6300f75615751252c8b71404436a66380aa74ab0c822663286d26981"} Feb 18 19:42:49 crc kubenswrapper[5007]: I0218 19:42:49.355422 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"78a5c6d9e771745c678691ec7820aeeba51cfc9d334272bf4f9325208823b4d6"} Feb 18 19:42:49 crc 
kubenswrapper[5007]: I0218 19:42:49.355450 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"efeb0e990b3559dbd048ef88257505aa2cb285c1218feb673a9498013f0655fa"} Feb 18 19:42:50 crc kubenswrapper[5007]: I0218 19:42:50.372719 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"be6de11e1d57cd894f9c8333006b02a78d2636263aab1f386f856efa002a6e59"} Feb 18 19:42:50 crc kubenswrapper[5007]: I0218 19:42:50.373120 5007 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52555a16-10cb-4142-a2ca-b0a2fcc76e4e" Feb 18 19:42:50 crc kubenswrapper[5007]: I0218 19:42:50.373150 5007 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52555a16-10cb-4142-a2ca-b0a2fcc76e4e" Feb 18 19:42:50 crc kubenswrapper[5007]: I0218 19:42:50.373219 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:51 crc kubenswrapper[5007]: I0218 19:42:51.611897 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" podUID="9105100f-dce8-42a8-b182-945de577952d" containerName="oauth-openshift" containerID="cri-o://2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261" gracePeriod=15 Feb 18 19:42:51 crc kubenswrapper[5007]: I0218 19:42:51.982332 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081444 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-error\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081503 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9105100f-dce8-42a8-b182-945de577952d-audit-dir\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081541 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-serving-cert\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081579 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-ocp-branding-template\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081600 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-service-ca\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 
19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081632 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwkf2\" (UniqueName: \"kubernetes.io/projected/9105100f-dce8-42a8-b182-945de577952d-kube-api-access-gwkf2\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081657 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-router-certs\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081719 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-idp-0-file-data\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081745 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-provider-selection\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081773 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-cliconfig\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081809 5007 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-login\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081827 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-trusted-ca-bundle\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081844 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-session\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081860 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-audit-policies\") pod \"9105100f-dce8-42a8-b182-945de577952d\" (UID: \"9105100f-dce8-42a8-b182-945de577952d\") " Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.081621 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9105100f-dce8-42a8-b182-945de577952d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.082498 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.082523 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.082868 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.083010 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.096721 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.099682 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9105100f-dce8-42a8-b182-945de577952d-kube-api-access-gwkf2" (OuterVolumeSpecName: "kube-api-access-gwkf2") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "kube-api-access-gwkf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.106003 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.106040 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.125567 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.130815 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.132958 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.133710 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.138859 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9105100f-dce8-42a8-b182-945de577952d" (UID: "9105100f-dce8-42a8-b182-945de577952d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183164 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwkf2\" (UniqueName: \"kubernetes.io/projected/9105100f-dce8-42a8-b182-945de577952d-kube-api-access-gwkf2\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183218 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183229 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183243 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183254 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183263 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183299 5007 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183308 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183319 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183328 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183337 5007 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9105100f-dce8-42a8-b182-945de577952d-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183346 5007 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183358 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.183367 5007 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9105100f-dce8-42a8-b182-945de577952d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.384882 5007 generic.go:334] "Generic (PLEG): container finished" podID="9105100f-dce8-42a8-b182-945de577952d" containerID="2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261" exitCode=0 Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.384925 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" event={"ID":"9105100f-dce8-42a8-b182-945de577952d","Type":"ContainerDied","Data":"2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261"} Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.384952 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" event={"ID":"9105100f-dce8-42a8-b182-945de577952d","Type":"ContainerDied","Data":"e56a44288d9379654467a19e095e8bb4fc0a17e7a3e0a2f9f518dab17e267882"} Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.384956 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b2xj7" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.384968 5007 scope.go:117] "RemoveContainer" containerID="2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.404309 5007 scope.go:117] "RemoveContainer" containerID="2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261" Feb 18 19:42:52 crc kubenswrapper[5007]: E0218 19:42:52.404779 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261\": container with ID starting with 2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261 not found: ID does not exist" containerID="2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.404842 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261"} err="failed to get container status \"2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261\": rpc error: code = NotFound desc = could not find container \"2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261\": container with ID starting with 2c9d7bfc574619b8f2438681aed565f353fd8b842dac9ddaa3d9f3134ae70261 not found: ID does not exist" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.925532 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.925612 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:52 crc kubenswrapper[5007]: I0218 19:42:52.934021 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:54 crc kubenswrapper[5007]: I0218 19:42:54.534730 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:42:55 crc kubenswrapper[5007]: I0218 19:42:55.383705 5007 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:55 crc kubenswrapper[5007]: I0218 19:42:55.404184 5007 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52555a16-10cb-4142-a2ca-b0a2fcc76e4e" Feb 18 19:42:55 crc kubenswrapper[5007]: I0218 19:42:55.404216 5007 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52555a16-10cb-4142-a2ca-b0a2fcc76e4e" Feb 18 19:42:55 crc kubenswrapper[5007]: I0218 19:42:55.407625 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:42:55 crc kubenswrapper[5007]: E0218 19:42:55.770499 5007 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Feb 18 19:42:55 crc kubenswrapper[5007]: E0218 19:42:55.771073 5007 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Feb 18 19:42:55 crc kubenswrapper[5007]: I0218 19:42:55.927245 5007 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5b252693-1c93-4429-8492-92746087e533" Feb 18 19:42:56 crc kubenswrapper[5007]: I0218 19:42:56.409778 5007 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52555a16-10cb-4142-a2ca-b0a2fcc76e4e" Feb 18 19:42:56 crc kubenswrapper[5007]: I0218 19:42:56.410131 5007 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52555a16-10cb-4142-a2ca-b0a2fcc76e4e" Feb 18 19:42:56 crc kubenswrapper[5007]: I0218 19:42:56.413277 5007 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5b252693-1c93-4429-8492-92746087e533" Feb 18 19:42:57 crc kubenswrapper[5007]: I0218 19:42:57.958220 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:42:57 crc kubenswrapper[5007]: I0218 19:42:57.972958 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:42:58 crc kubenswrapper[5007]: I0218 19:42:58.428239 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:43:04 crc kubenswrapper[5007]: I0218 19:43:04.515197 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 19:43:04 crc kubenswrapper[5007]: I0218 19:43:04.814313 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 19:43:04 crc kubenswrapper[5007]: I0218 19:43:04.858656 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 19:43:05 crc kubenswrapper[5007]: I0218 19:43:05.239982 5007 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 19:43:06 crc kubenswrapper[5007]: I0218 19:43:06.402217 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 19:43:06 crc kubenswrapper[5007]: I0218 19:43:06.599907 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 19:43:06 crc kubenswrapper[5007]: I0218 19:43:06.784253 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 19:43:06 crc kubenswrapper[5007]: I0218 19:43:06.862667 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 19:43:06 crc kubenswrapper[5007]: I0218 19:43:06.870547 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 19:43:06 crc kubenswrapper[5007]: I0218 19:43:06.930552 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 19:43:06 crc kubenswrapper[5007]: I0218 19:43:06.966790 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 19:43:07 crc kubenswrapper[5007]: I0218 19:43:07.035270 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 19:43:07 crc kubenswrapper[5007]: I0218 19:43:07.288862 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 19:43:07 crc kubenswrapper[5007]: I0218 19:43:07.553819 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 19:43:07 crc kubenswrapper[5007]: I0218 19:43:07.568156 5007 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 19:43:07 crc kubenswrapper[5007]: I0218 19:43:07.661121 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 19:43:07 crc kubenswrapper[5007]: I0218 19:43:07.830420 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 19:43:07 crc kubenswrapper[5007]: I0218 19:43:07.848666 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 19:43:07 crc kubenswrapper[5007]: I0218 19:43:07.871209 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 19:43:07 crc kubenswrapper[5007]: I0218 19:43:07.895740 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 19:43:07 crc kubenswrapper[5007]: I0218 19:43:07.912833 5007 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 19:43:07 crc kubenswrapper[5007]: I0218 19:43:07.938812 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 19:43:07 crc kubenswrapper[5007]: I0218 19:43:07.996345 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 19:43:08 crc kubenswrapper[5007]: I0218 19:43:08.012095 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 19:43:08 crc kubenswrapper[5007]: I0218 19:43:08.191652 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 19:43:08 crc kubenswrapper[5007]: I0218 19:43:08.240421 5007 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 19:43:08 crc kubenswrapper[5007]: I0218 19:43:08.335303 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 19:43:08 crc kubenswrapper[5007]: I0218 19:43:08.387496 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 19:43:08 crc kubenswrapper[5007]: I0218 19:43:08.709734 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 19:43:08 crc kubenswrapper[5007]: I0218 19:43:08.745646 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:43:08 crc kubenswrapper[5007]: I0218 19:43:08.863447 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 19:43:08 crc kubenswrapper[5007]: I0218 19:43:08.964804 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.016359 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.128080 5007 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.128886 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.128862707 podStartE2EDuration="36.128862707s" podCreationTimestamp="2026-02-18 19:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:42:55.402880693 +0000 UTC m=+260.185000217" watchObservedRunningTime="2026-02-18 19:43:09.128862707 +0000 UTC m=+273.910982231" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.133179 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/redhat-operators-qk22p","openshift-authentication/oauth-openshift-558db77b4-b2xj7"] Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.133228 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.138169 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.161073 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.161048936 podStartE2EDuration="14.161048936s" podCreationTimestamp="2026-02-18 19:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:43:09.155502244 +0000 UTC m=+273.937621778" watchObservedRunningTime="2026-02-18 19:43:09.161048936 +0000 UTC m=+273.943168470" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.221188 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.266628 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.291509 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.323705 
5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.405936 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.505384 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.532938 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.581688 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.582246 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.585694 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.689589 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.730379 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.905143 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.916902 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303d7555-90c4-459c-9dcf-181f93d89ed8" 
path="/var/lib/kubelet/pods/303d7555-90c4-459c-9dcf-181f93d89ed8/volumes" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.917690 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9105100f-dce8-42a8-b182-945de577952d" path="/var/lib/kubelet/pods/9105100f-dce8-42a8-b182-945de577952d/volumes" Feb 18 19:43:09 crc kubenswrapper[5007]: I0218 19:43:09.954395 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.013737 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.039959 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.043705 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.045380 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.115856 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.277130 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.385967 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.391337 5007 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.497119 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.503325 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.525160 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.527328 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.568575 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.669413 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.674793 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.683851 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.695143 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.713941 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.740782 5007 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.789972 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.814416 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.815205 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.877276 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.925257 5007 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 19:43:10 crc kubenswrapper[5007]: I0218 19:43:10.972878 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.043608 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.066910 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.147513 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.211612 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.219383 5007 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.278861 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.373032 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.402130 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.506284 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.539856 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.545198 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.585918 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.701358 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.789045 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.900660 5007 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.946672 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 19:43:11 crc kubenswrapper[5007]: I0218 19:43:11.948340 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.002890 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.071603 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.080017 5007 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.093096 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.137446 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.258628 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.369283 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.381642 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.486927 5007 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.607311 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.815535 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.846226 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.876129 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.970547 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 18 19:43:12 crc kubenswrapper[5007]: I0218 19:43:12.981525 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.014931 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.085817 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.149589 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.229851 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.333778 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.344281 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.382814 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.389249 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.398986 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.462335 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.467656 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.580892 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.657233 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.710678 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.755974 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.787639 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.807771 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.938588 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.957835 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 18 19:43:13 crc kubenswrapper[5007]: I0218 19:43:13.975988 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.077327 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.172253 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.173016 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.180681 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.206654 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.324564 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.345466 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.431871 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.470352 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.529027 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.543981 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.545922 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.584105 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.639226 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.714125 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.718340 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.734921 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.863461 5007 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.905045 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 18 19:43:14 crc kubenswrapper[5007]: I0218 19:43:14.962186 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.030283 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.096904 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.137884 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.149800 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.178408 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.259386 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.334821 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.416165 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.440869 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.448622 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.532448 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.547596 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.565513 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.583985 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.781229 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.831641 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.917341 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 18 19:43:15 crc kubenswrapper[5007]: I0218 19:43:15.932114 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.033411 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.047305 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.058561 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.122832 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.314826 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.482233 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.602304 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.616309 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.676038 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.681786 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.734996 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.868639 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.898948 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.920010 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.979579 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 18 19:43:16 crc kubenswrapper[5007]: I0218 19:43:16.983821 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.057571 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.112741 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.146258 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.188296 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.246064 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.339711 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.342135 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.426151 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.468411 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.469629 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.553559 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.650339 5007 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.650824 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://6b37a5d301b180728cf9229ec036af613f6b04db2d684b45fe4e7adfbfc4fa89" gracePeriod=5
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.661805 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.785450 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.848112 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.859516 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.918473 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.918581 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.941719 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 18 19:43:17 crc kubenswrapper[5007]: I0218 19:43:17.948063 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.006542 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"]
Feb 18 19:43:18 crc kubenswrapper[5007]: E0218 19:43:18.006822 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" containerName="installer"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.006840 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" containerName="installer"
Feb 18 19:43:18 crc kubenswrapper[5007]: E0218 19:43:18.006854 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.006864 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 19:43:18 crc kubenswrapper[5007]: E0218 19:43:18.006880 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9105100f-dce8-42a8-b182-945de577952d" containerName="oauth-openshift"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.006890 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="9105100f-dce8-42a8-b182-945de577952d" containerName="oauth-openshift"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.007028 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef8063d1-2e56-42bd-9310-993ac6e4d2b9" containerName="installer"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.007049 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="9105100f-dce8-42a8-b182-945de577952d" containerName="oauth-openshift"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.007066 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.007590 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.009185 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.010300 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.010826 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.011125 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.011205 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.011263 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.011375 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.011479 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.011591 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.011741 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.012561 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.018211 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.020397 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.022606 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.042101 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.051459 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.057399 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"]
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.154754 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-service-ca\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.154817 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-user-template-login\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.154858 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-user-template-error\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.154903 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-session\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.154930 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.154964 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.154989 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.155048 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-audit-policies\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.155073 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.155112 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-audit-dir\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.155135 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-router-certs\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.155183 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx8fq\" (UniqueName: \"kubernetes.io/projected/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-kube-api-access-xx8fq\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.155227 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.155250 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256288 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx8fq\" (UniqueName: \"kubernetes.io/projected/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-kube-api-access-xx8fq\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256358 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256388 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256445 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-service-ca\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256470 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-user-template-error\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256490 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-user-template-login\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256522 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-session\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256545 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256576 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256597 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256633 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-audit-policies\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256659 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256703 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-audit-dir\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.256725 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-router-certs\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.257514 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.257587 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-audit-dir\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.258023 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-service-ca\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.258379 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-audit-policies\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.258794 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.262367 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.262577 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-session\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.262992 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-user-template-login\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"
Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.264000 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") "
pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.265339 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-user-template-error\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.265800 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-router-certs\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.266156 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.276953 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.289035 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx8fq\" (UniqueName: 
\"kubernetes.io/projected/75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf-kube-api-access-xx8fq\") pod \"oauth-openshift-6447dfb5d9-xndbp\" (UID: \"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf\") " pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.324029 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.327409 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.446325 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.502973 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.537809 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.554127 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.658063 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.659856 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.708555 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 19:43:18 crc 
kubenswrapper[5007]: I0218 19:43:18.778409 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6447dfb5d9-xndbp"] Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.787866 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 19:43:18 crc kubenswrapper[5007]: I0218 19:43:18.933832 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.103239 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.171001 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.289170 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.298396 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.374064 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.404230 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.406640 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.504832 5007 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.567287 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp" event={"ID":"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf","Type":"ContainerStarted","Data":"3cdcecfe499a8fa3ca99a3df982de5156e13df97fdea6d85d67098f836151dc5"} Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.567333 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp" event={"ID":"75a45f85-a9f8-46eb-8eb2-4a569ce4f7bf","Type":"ContainerStarted","Data":"233ac8cf0a2b9e39ebfc1d5ae4cc14df1c276831d4f2d457c9700cde49062bdb"} Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.568977 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.576380 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.595167 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6447dfb5d9-xndbp" podStartSLOduration=53.595151818 podStartE2EDuration="53.595151818s" podCreationTimestamp="2026-02-18 19:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:43:19.591621911 +0000 UTC m=+284.373741445" watchObservedRunningTime="2026-02-18 19:43:19.595151818 +0000 UTC m=+284.377271342" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.628325 5007 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.646003 
5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 19:43:19 crc kubenswrapper[5007]: I0218 19:43:19.906415 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 19:43:20 crc kubenswrapper[5007]: I0218 19:43:20.136311 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 19:43:20 crc kubenswrapper[5007]: I0218 19:43:20.193328 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 19:43:20 crc kubenswrapper[5007]: I0218 19:43:20.255481 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 19:43:20 crc kubenswrapper[5007]: I0218 19:43:20.280028 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 19:43:20 crc kubenswrapper[5007]: I0218 19:43:20.315847 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 19:43:20 crc kubenswrapper[5007]: I0218 19:43:20.364096 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 19:43:20 crc kubenswrapper[5007]: I0218 19:43:20.389755 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 19:43:20 crc kubenswrapper[5007]: I0218 19:43:20.537697 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:43:20 crc kubenswrapper[5007]: I0218 19:43:20.559538 5007 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 19:43:20 crc kubenswrapper[5007]: I0218 19:43:20.670357 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 19:43:20 crc kubenswrapper[5007]: I0218 19:43:20.745907 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 19:43:20 crc kubenswrapper[5007]: I0218 19:43:20.815664 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 19:43:21 crc kubenswrapper[5007]: I0218 19:43:21.458629 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 19:43:21 crc kubenswrapper[5007]: I0218 19:43:21.546133 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 19:43:21 crc kubenswrapper[5007]: I0218 19:43:21.813226 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.391313 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.391391 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.537635 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.538111 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.537860 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.538226 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.538656 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.538842 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.538933 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.538950 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.539080 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.539799 5007 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.539854 5007 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.539883 5007 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.539908 5007 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.551118 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.596460 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.596545 5007 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6b37a5d301b180728cf9229ec036af613f6b04db2d684b45fe4e7adfbfc4fa89" exitCode=137 Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.596607 5007 scope.go:117] "RemoveContainer" containerID="6b37a5d301b180728cf9229ec036af613f6b04db2d684b45fe4e7adfbfc4fa89" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.596730 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.620883 5007 scope.go:117] "RemoveContainer" containerID="6b37a5d301b180728cf9229ec036af613f6b04db2d684b45fe4e7adfbfc4fa89" Feb 18 19:43:23 crc kubenswrapper[5007]: E0218 19:43:23.621675 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b37a5d301b180728cf9229ec036af613f6b04db2d684b45fe4e7adfbfc4fa89\": container with ID starting with 6b37a5d301b180728cf9229ec036af613f6b04db2d684b45fe4e7adfbfc4fa89 not found: ID does not exist" containerID="6b37a5d301b180728cf9229ec036af613f6b04db2d684b45fe4e7adfbfc4fa89" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.621745 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b37a5d301b180728cf9229ec036af613f6b04db2d684b45fe4e7adfbfc4fa89"} err="failed to get container status \"6b37a5d301b180728cf9229ec036af613f6b04db2d684b45fe4e7adfbfc4fa89\": rpc error: code = NotFound desc = could not find container 
\"6b37a5d301b180728cf9229ec036af613f6b04db2d684b45fe4e7adfbfc4fa89\": container with ID starting with 6b37a5d301b180728cf9229ec036af613f6b04db2d684b45fe4e7adfbfc4fa89 not found: ID does not exist" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.641189 5007 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.923605 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.924174 5007 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.942408 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.942459 5007 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ee0070ab-e02e-49d0-b90c-cdb67b6b93eb" Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.950905 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:43:23 crc kubenswrapper[5007]: I0218 19:43:23.950998 5007 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ee0070ab-e02e-49d0-b90c-cdb67b6b93eb" Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.208280 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k4wz9"] Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 
19:43:29.209205 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k4wz9" podUID="8c778c82-e9ab-40e7-b084-07195587a975" containerName="registry-server" containerID="cri-o://8495bad923e38e3a7c559fe87ea588828a334bae0911d3fae3c5b827f6e1cb45" gracePeriod=30 Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.221580 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z8ph5"] Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.222090 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z8ph5" podUID="84dca2c7-0ed9-4824-8431-88a26864b81c" containerName="registry-server" containerID="cri-o://3455a98e40d89562e40e6e94ba15d161b0158d0cd6a9fbe929a8cc5a96dcdd33" gracePeriod=30 Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.232917 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tfxfg"] Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.233247 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" podUID="e7e8415f-6c51-452d-b0dc-13edd5475f6d" containerName="marketplace-operator" containerID="cri-o://8888ea3b6a01fc2fc18fe155d9a37858fd327ea972185945b11ff020292178a5" gracePeriod=30 Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.240273 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vddlz"] Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.242155 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vddlz" podUID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" containerName="registry-server" containerID="cri-o://bdca2c74852b3c989be6c7ee84b8ba2c4af88a9c34506c01e73856f7e0bb5db8" gracePeriod=30 Feb 18 
19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.255085 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mdh7g"] Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.255371 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mdh7g" podUID="3504cb19-2058-4787-b5fb-c0b64aa2322f" containerName="registry-server" containerID="cri-o://91c1c965f121023a9a7594c8932fc8de18b29bebfd508d87d155df90b05a7be4" gracePeriod=30 Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.266056 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jcq7x"] Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.266832 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x" Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.281845 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jcq7x"] Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.421643 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8dce8b0a-01a5-4ed3-a925-8050098d4ff5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jcq7x\" (UID: \"8dce8b0a-01a5-4ed3-a925-8050098d4ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x" Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.422071 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8dce8b0a-01a5-4ed3-a925-8050098d4ff5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jcq7x\" (UID: \"8dce8b0a-01a5-4ed3-a925-8050098d4ff5\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.422116 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sh44\" (UniqueName: \"kubernetes.io/projected/8dce8b0a-01a5-4ed3-a925-8050098d4ff5-kube-api-access-4sh44\") pod \"marketplace-operator-79b997595-jcq7x\" (UID: \"8dce8b0a-01a5-4ed3-a925-8050098d4ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.523915 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8dce8b0a-01a5-4ed3-a925-8050098d4ff5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jcq7x\" (UID: \"8dce8b0a-01a5-4ed3-a925-8050098d4ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.523992 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8dce8b0a-01a5-4ed3-a925-8050098d4ff5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jcq7x\" (UID: \"8dce8b0a-01a5-4ed3-a925-8050098d4ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.524049 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sh44\" (UniqueName: \"kubernetes.io/projected/8dce8b0a-01a5-4ed3-a925-8050098d4ff5-kube-api-access-4sh44\") pod \"marketplace-operator-79b997595-jcq7x\" (UID: \"8dce8b0a-01a5-4ed3-a925-8050098d4ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.526534 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8dce8b0a-01a5-4ed3-a925-8050098d4ff5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jcq7x\" (UID: \"8dce8b0a-01a5-4ed3-a925-8050098d4ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.531223 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8dce8b0a-01a5-4ed3-a925-8050098d4ff5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jcq7x\" (UID: \"8dce8b0a-01a5-4ed3-a925-8050098d4ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.546540 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sh44\" (UniqueName: \"kubernetes.io/projected/8dce8b0a-01a5-4ed3-a925-8050098d4ff5-kube-api-access-4sh44\") pod \"marketplace-operator-79b997595-jcq7x\" (UID: \"8dce8b0a-01a5-4ed3-a925-8050098d4ff5\") " pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.653795 5007 generic.go:334] "Generic (PLEG): container finished" podID="8c778c82-e9ab-40e7-b084-07195587a975" containerID="8495bad923e38e3a7c559fe87ea588828a334bae0911d3fae3c5b827f6e1cb45" exitCode=0
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.653859 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4wz9" event={"ID":"8c778c82-e9ab-40e7-b084-07195587a975","Type":"ContainerDied","Data":"8495bad923e38e3a7c559fe87ea588828a334bae0911d3fae3c5b827f6e1cb45"}
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.655940 5007 generic.go:334] "Generic (PLEG): container finished" podID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" containerID="bdca2c74852b3c989be6c7ee84b8ba2c4af88a9c34506c01e73856f7e0bb5db8" exitCode=0
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.655988 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vddlz" event={"ID":"9cbfc029-9e57-4f39-942c-47f3c8f866b2","Type":"ContainerDied","Data":"bdca2c74852b3c989be6c7ee84b8ba2c4af88a9c34506c01e73856f7e0bb5db8"}
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.657243 5007 generic.go:334] "Generic (PLEG): container finished" podID="e7e8415f-6c51-452d-b0dc-13edd5475f6d" containerID="8888ea3b6a01fc2fc18fe155d9a37858fd327ea972185945b11ff020292178a5" exitCode=0
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.657286 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" event={"ID":"e7e8415f-6c51-452d-b0dc-13edd5475f6d","Type":"ContainerDied","Data":"8888ea3b6a01fc2fc18fe155d9a37858fd327ea972185945b11ff020292178a5"}
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.667974 5007 generic.go:334] "Generic (PLEG): container finished" podID="3504cb19-2058-4787-b5fb-c0b64aa2322f" containerID="91c1c965f121023a9a7594c8932fc8de18b29bebfd508d87d155df90b05a7be4" exitCode=0
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.668067 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdh7g" event={"ID":"3504cb19-2058-4787-b5fb-c0b64aa2322f","Type":"ContainerDied","Data":"91c1c965f121023a9a7594c8932fc8de18b29bebfd508d87d155df90b05a7be4"}
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.672362 5007 generic.go:334] "Generic (PLEG): container finished" podID="84dca2c7-0ed9-4824-8431-88a26864b81c" containerID="3455a98e40d89562e40e6e94ba15d161b0158d0cd6a9fbe929a8cc5a96dcdd33" exitCode=0
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.672441 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8ph5" event={"ID":"84dca2c7-0ed9-4824-8431-88a26864b81c","Type":"ContainerDied","Data":"3455a98e40d89562e40e6e94ba15d161b0158d0cd6a9fbe929a8cc5a96dcdd33"}
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.672577 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8ph5" event={"ID":"84dca2c7-0ed9-4824-8431-88a26864b81c","Type":"ContainerDied","Data":"e9a9590641b499658f791976478da87fd8435b4caee11949430782546ca44c91"}
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.672605 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9a9590641b499658f791976478da87fd8435b4caee11949430782546ca44c91"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.694568 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.706358 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z8ph5"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.714113 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdh7g"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.719136 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.726019 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4wz9"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.733064 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vddlz"
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.827399 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84dca2c7-0ed9-4824-8431-88a26864b81c-catalog-content\") pod \"84dca2c7-0ed9-4824-8431-88a26864b81c\" (UID: \"84dca2c7-0ed9-4824-8431-88a26864b81c\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.827976 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84dca2c7-0ed9-4824-8431-88a26864b81c-utilities\") pod \"84dca2c7-0ed9-4824-8431-88a26864b81c\" (UID: \"84dca2c7-0ed9-4824-8431-88a26864b81c\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828008 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cbfc029-9e57-4f39-942c-47f3c8f866b2-catalog-content\") pod \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\" (UID: \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828053 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c778c82-e9ab-40e7-b084-07195587a975-catalog-content\") pod \"8c778c82-e9ab-40e7-b084-07195587a975\" (UID: \"8c778c82-e9ab-40e7-b084-07195587a975\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828104 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q966\" (UniqueName: \"kubernetes.io/projected/9cbfc029-9e57-4f39-942c-47f3c8f866b2-kube-api-access-5q966\") pod \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\" (UID: \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828156 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3504cb19-2058-4787-b5fb-c0b64aa2322f-utilities\") pod \"3504cb19-2058-4787-b5fb-c0b64aa2322f\" (UID: \"3504cb19-2058-4787-b5fb-c0b64aa2322f\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828182 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhg4t\" (UniqueName: \"kubernetes.io/projected/84dca2c7-0ed9-4824-8431-88a26864b81c-kube-api-access-zhg4t\") pod \"84dca2c7-0ed9-4824-8431-88a26864b81c\" (UID: \"84dca2c7-0ed9-4824-8431-88a26864b81c\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828205 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c778c82-e9ab-40e7-b084-07195587a975-utilities\") pod \"8c778c82-e9ab-40e7-b084-07195587a975\" (UID: \"8c778c82-e9ab-40e7-b084-07195587a975\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828226 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq78t\" (UniqueName: \"kubernetes.io/projected/3504cb19-2058-4787-b5fb-c0b64aa2322f-kube-api-access-hq78t\") pod \"3504cb19-2058-4787-b5fb-c0b64aa2322f\" (UID: \"3504cb19-2058-4787-b5fb-c0b64aa2322f\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828257 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e7e8415f-6c51-452d-b0dc-13edd5475f6d-marketplace-operator-metrics\") pod \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\" (UID: \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828314 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm2n2\" (UniqueName: \"kubernetes.io/projected/8c778c82-e9ab-40e7-b084-07195587a975-kube-api-access-qm2n2\") pod \"8c778c82-e9ab-40e7-b084-07195587a975\" (UID: \"8c778c82-e9ab-40e7-b084-07195587a975\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828343 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cbfc029-9e57-4f39-942c-47f3c8f866b2-utilities\") pod \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\" (UID: \"9cbfc029-9e57-4f39-942c-47f3c8f866b2\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828384 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7p96\" (UniqueName: \"kubernetes.io/projected/e7e8415f-6c51-452d-b0dc-13edd5475f6d-kube-api-access-l7p96\") pod \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\" (UID: \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828409 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7e8415f-6c51-452d-b0dc-13edd5475f6d-marketplace-trusted-ca\") pod \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\" (UID: \"e7e8415f-6c51-452d-b0dc-13edd5475f6d\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.828457 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3504cb19-2058-4787-b5fb-c0b64aa2322f-catalog-content\") pod \"3504cb19-2058-4787-b5fb-c0b64aa2322f\" (UID: \"3504cb19-2058-4787-b5fb-c0b64aa2322f\") "
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.831679 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c778c82-e9ab-40e7-b084-07195587a975-utilities" (OuterVolumeSpecName: "utilities") pod "8c778c82-e9ab-40e7-b084-07195587a975" (UID: "8c778c82-e9ab-40e7-b084-07195587a975"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.832580 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cbfc029-9e57-4f39-942c-47f3c8f866b2-utilities" (OuterVolumeSpecName: "utilities") pod "9cbfc029-9e57-4f39-942c-47f3c8f866b2" (UID: "9cbfc029-9e57-4f39-942c-47f3c8f866b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.833225 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3504cb19-2058-4787-b5fb-c0b64aa2322f-utilities" (OuterVolumeSpecName: "utilities") pod "3504cb19-2058-4787-b5fb-c0b64aa2322f" (UID: "3504cb19-2058-4787-b5fb-c0b64aa2322f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.838650 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84dca2c7-0ed9-4824-8431-88a26864b81c-utilities" (OuterVolumeSpecName: "utilities") pod "84dca2c7-0ed9-4824-8431-88a26864b81c" (UID: "84dca2c7-0ed9-4824-8431-88a26864b81c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.841296 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e8415f-6c51-452d-b0dc-13edd5475f6d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e7e8415f-6c51-452d-b0dc-13edd5475f6d" (UID: "e7e8415f-6c51-452d-b0dc-13edd5475f6d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.842107 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3504cb19-2058-4787-b5fb-c0b64aa2322f-kube-api-access-hq78t" (OuterVolumeSpecName: "kube-api-access-hq78t") pod "3504cb19-2058-4787-b5fb-c0b64aa2322f" (UID: "3504cb19-2058-4787-b5fb-c0b64aa2322f"). InnerVolumeSpecName "kube-api-access-hq78t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.843292 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e8415f-6c51-452d-b0dc-13edd5475f6d-kube-api-access-l7p96" (OuterVolumeSpecName: "kube-api-access-l7p96") pod "e7e8415f-6c51-452d-b0dc-13edd5475f6d" (UID: "e7e8415f-6c51-452d-b0dc-13edd5475f6d"). InnerVolumeSpecName "kube-api-access-l7p96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.845582 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c778c82-e9ab-40e7-b084-07195587a975-kube-api-access-qm2n2" (OuterVolumeSpecName: "kube-api-access-qm2n2") pod "8c778c82-e9ab-40e7-b084-07195587a975" (UID: "8c778c82-e9ab-40e7-b084-07195587a975"). InnerVolumeSpecName "kube-api-access-qm2n2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.851975 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e8415f-6c51-452d-b0dc-13edd5475f6d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e7e8415f-6c51-452d-b0dc-13edd5475f6d" (UID: "e7e8415f-6c51-452d-b0dc-13edd5475f6d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.854082 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cbfc029-9e57-4f39-942c-47f3c8f866b2-kube-api-access-5q966" (OuterVolumeSpecName: "kube-api-access-5q966") pod "9cbfc029-9e57-4f39-942c-47f3c8f866b2" (UID: "9cbfc029-9e57-4f39-942c-47f3c8f866b2"). InnerVolumeSpecName "kube-api-access-5q966". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.856300 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84dca2c7-0ed9-4824-8431-88a26864b81c-kube-api-access-zhg4t" (OuterVolumeSpecName: "kube-api-access-zhg4t") pod "84dca2c7-0ed9-4824-8431-88a26864b81c" (UID: "84dca2c7-0ed9-4824-8431-88a26864b81c"). InnerVolumeSpecName "kube-api-access-zhg4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.858657 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cbfc029-9e57-4f39-942c-47f3c8f866b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cbfc029-9e57-4f39-942c-47f3c8f866b2" (UID: "9cbfc029-9e57-4f39-942c-47f3c8f866b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.906365 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c778c82-e9ab-40e7-b084-07195587a975-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c778c82-e9ab-40e7-b084-07195587a975" (UID: "8c778c82-e9ab-40e7-b084-07195587a975"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.911993 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84dca2c7-0ed9-4824-8431-88a26864b81c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84dca2c7-0ed9-4824-8431-88a26864b81c" (UID: "84dca2c7-0ed9-4824-8431-88a26864b81c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930337 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q966\" (UniqueName: \"kubernetes.io/projected/9cbfc029-9e57-4f39-942c-47f3c8f866b2-kube-api-access-5q966\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930367 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhg4t\" (UniqueName: \"kubernetes.io/projected/84dca2c7-0ed9-4824-8431-88a26864b81c-kube-api-access-zhg4t\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930379 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3504cb19-2058-4787-b5fb-c0b64aa2322f-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930389 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c778c82-e9ab-40e7-b084-07195587a975-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930398 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq78t\" (UniqueName: \"kubernetes.io/projected/3504cb19-2058-4787-b5fb-c0b64aa2322f-kube-api-access-hq78t\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930407 5007 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e7e8415f-6c51-452d-b0dc-13edd5475f6d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930417 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm2n2\" (UniqueName: \"kubernetes.io/projected/8c778c82-e9ab-40e7-b084-07195587a975-kube-api-access-qm2n2\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930443 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cbfc029-9e57-4f39-942c-47f3c8f866b2-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930458 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7p96\" (UniqueName: \"kubernetes.io/projected/e7e8415f-6c51-452d-b0dc-13edd5475f6d-kube-api-access-l7p96\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930474 5007 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7e8415f-6c51-452d-b0dc-13edd5475f6d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930483 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84dca2c7-0ed9-4824-8431-88a26864b81c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930492 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84dca2c7-0ed9-4824-8431-88a26864b81c-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930503 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cbfc029-9e57-4f39-942c-47f3c8f866b2-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.930511 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c778c82-e9ab-40e7-b084-07195587a975-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:29 crc kubenswrapper[5007]: I0218 19:43:29.977215 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3504cb19-2058-4787-b5fb-c0b64aa2322f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3504cb19-2058-4787-b5fb-c0b64aa2322f" (UID: "3504cb19-2058-4787-b5fb-c0b64aa2322f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.031699 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3504cb19-2058-4787-b5fb-c0b64aa2322f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.138040 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jcq7x"]
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.686099 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg" event={"ID":"e7e8415f-6c51-452d-b0dc-13edd5475f6d","Type":"ContainerDied","Data":"fe0e64b078f547a81a29c455217902be41377877fe7ad93218ed2db497e6c2e3"}
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.687641 5007 scope.go:117] "RemoveContainer" containerID="8888ea3b6a01fc2fc18fe155d9a37858fd327ea972185945b11ff020292178a5"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.687826 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x" event={"ID":"8dce8b0a-01a5-4ed3-a925-8050098d4ff5","Type":"ContainerStarted","Data":"0fd17329ebbf47250e4c359365a21616d915f017e3368bd57fa59aeb631ada92"}
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.687874 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x" event={"ID":"8dce8b0a-01a5-4ed3-a925-8050098d4ff5","Type":"ContainerStarted","Data":"e5e0398b0483eb4509f742351d281b46716fa2c4a7df80840b9603bcffbd7fe7"}
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.687956 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tfxfg"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.688443 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.689092 5007 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jcq7x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body=
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.689135 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x" podUID="8dce8b0a-01a5-4ed3-a925-8050098d4ff5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.693726 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdh7g" event={"ID":"3504cb19-2058-4787-b5fb-c0b64aa2322f","Type":"ContainerDied","Data":"e92c338bbe119dae19a8137e80d73cdf7bdc269a580658990119b306688a1883"}
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.693750 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdh7g"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.703752 5007 scope.go:117] "RemoveContainer" containerID="91c1c965f121023a9a7594c8932fc8de18b29bebfd508d87d155df90b05a7be4"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.707128 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4wz9" event={"ID":"8c778c82-e9ab-40e7-b084-07195587a975","Type":"ContainerDied","Data":"792e662a999f142236f4f32cb21b048854025d0e45424e77cd79051362d844d4"}
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.707160 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4wz9"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.710802 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vddlz" event={"ID":"9cbfc029-9e57-4f39-942c-47f3c8f866b2","Type":"ContainerDied","Data":"5f33155305a27da3c64cd453c185041d05d25fc2cf99848a4c9797a162857bf5"}
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.710833 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z8ph5"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.711327 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vddlz"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.713979 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x" podStartSLOduration=1.713950009 podStartE2EDuration="1.713950009s" podCreationTimestamp="2026-02-18 19:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:43:30.706191177 +0000 UTC m=+295.488310761" watchObservedRunningTime="2026-02-18 19:43:30.713950009 +0000 UTC m=+295.496069553"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.721485 5007 scope.go:117] "RemoveContainer" containerID="6835ee4472ba0edbe69b8a44430dbfe2d752d97e0454970e3839e40b38408f33"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.740737 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tfxfg"]
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.750266 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tfxfg"]
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.764692 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vddlz"]
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.769181 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vddlz"]
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.769584 5007 scope.go:117] "RemoveContainer" containerID="df67e6dd85644c1e09be67cb107a2ad9607d4f80ce3d4b3b629390e37f9b3dd9"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.778968 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k4wz9"]
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.787546 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k4wz9"]
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.793874 5007 scope.go:117] "RemoveContainer" containerID="8495bad923e38e3a7c559fe87ea588828a334bae0911d3fae3c5b827f6e1cb45"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.795783 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z8ph5"]
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.805792 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z8ph5"]
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.811564 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mdh7g"]
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.816036 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mdh7g"]
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.816987 5007 scope.go:117] "RemoveContainer" containerID="52fdd9ecc8c306909f0bd4c366812eab2f3bd4b70dbc1b351fc40a582428870f"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.837019 5007 scope.go:117] "RemoveContainer" containerID="6a1bf21b9248245a518d8f094ed7cbd58efa2126e86b655aef98a3f825b4eb7c"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.857366 5007 scope.go:117] "RemoveContainer" containerID="bdca2c74852b3c989be6c7ee84b8ba2c4af88a9c34506c01e73856f7e0bb5db8"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.883141 5007 scope.go:117] "RemoveContainer" containerID="8f16989626bc6873aa48c6ad2ac8ae6b9b4fb11823990438e36061ce46d9543a"
Feb 18 19:43:30 crc kubenswrapper[5007]: I0218 19:43:30.908529 5007 scope.go:117] "RemoveContainer" containerID="4c0c5753915e08973ae03f850fed76b02158693c4b710414698bd5a97c0384ee"
Feb 18 19:43:31 crc kubenswrapper[5007]: I0218 19:43:31.726486 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jcq7x"
Feb 18 19:43:31 crc kubenswrapper[5007]: I0218 19:43:31.918916 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3504cb19-2058-4787-b5fb-c0b64aa2322f" path="/var/lib/kubelet/pods/3504cb19-2058-4787-b5fb-c0b64aa2322f/volumes"
Feb 18 19:43:31 crc kubenswrapper[5007]: I0218 19:43:31.920308 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84dca2c7-0ed9-4824-8431-88a26864b81c" path="/var/lib/kubelet/pods/84dca2c7-0ed9-4824-8431-88a26864b81c/volumes"
Feb 18 19:43:31 crc kubenswrapper[5007]: I0218 19:43:31.922009 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c778c82-e9ab-40e7-b084-07195587a975" path="/var/lib/kubelet/pods/8c778c82-e9ab-40e7-b084-07195587a975/volumes"
Feb 18 19:43:31 crc kubenswrapper[5007]: I0218 19:43:31.924156 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" path="/var/lib/kubelet/pods/9cbfc029-9e57-4f39-942c-47f3c8f866b2/volumes"
Feb 18 19:43:31 crc kubenswrapper[5007]: I0218 19:43:31.925627 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e8415f-6c51-452d-b0dc-13edd5475f6d" path="/var/lib/kubelet/pods/e7e8415f-6c51-452d-b0dc-13edd5475f6d/volumes"
Feb 18 19:43:35 crc kubenswrapper[5007]: I0218 19:43:35.736145 5007 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 18 19:43:36 crc kubenswrapper[5007]: I0218 19:43:36.083284 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 18 19:43:46 crc kubenswrapper[5007]: I0218 19:43:46.929771 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.339521 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gdbm9"]
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.340601 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" podUID="02fc3cf7-03bd-4940-956b-51c16a41c310" containerName="controller-manager" containerID="cri-o://1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0" gracePeriod=30
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.496490 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7"]
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.496746 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" podUID="41d2c625-82ed-4b63-bd0b-9acf5199e4f6" containerName="route-controller-manager" containerID="cri-o://4d763ccb2bdf90ced8a32d2ae1ae5c8e3090d26b92f0dce9d2269ea3f69bceee" gracePeriod=30
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.778239 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9"
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.819188 5007 generic.go:334] "Generic (PLEG): container finished" podID="41d2c625-82ed-4b63-bd0b-9acf5199e4f6" containerID="4d763ccb2bdf90ced8a32d2ae1ae5c8e3090d26b92f0dce9d2269ea3f69bceee" exitCode=0
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.819257 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" event={"ID":"41d2c625-82ed-4b63-bd0b-9acf5199e4f6","Type":"ContainerDied","Data":"4d763ccb2bdf90ced8a32d2ae1ae5c8e3090d26b92f0dce9d2269ea3f69bceee"}
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.820283 5007 generic.go:334] "Generic (PLEG): container finished" podID="02fc3cf7-03bd-4940-956b-51c16a41c310" containerID="1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0" exitCode=0
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.820317 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" event={"ID":"02fc3cf7-03bd-4940-956b-51c16a41c310","Type":"ContainerDied","Data":"1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0"}
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.820345 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9" event={"ID":"02fc3cf7-03bd-4940-956b-51c16a41c310","Type":"ContainerDied","Data":"3c167e66a7e7499c18a767d416bdac838133a14f1c303d21646d6f6a2d5770b7"}
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.820362 5007 scope.go:117] "RemoveContainer" containerID="1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0"
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.820483 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gdbm9"
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.843687 5007 scope.go:117] "RemoveContainer" containerID="1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0"
Feb 18 19:43:47 crc kubenswrapper[5007]: E0218 19:43:47.844828 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0\": container with ID starting with 1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0 not found: ID does not exist" containerID="1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0"
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.844883 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0"} err="failed to get container status \"1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0\": rpc error: code = NotFound desc = could not find container \"1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0\": container with ID starting with 1d2461b85cbf04aefa06c92b194f8dd30766e07b420aaac4772d45437ef7afe0 not found: ID does not exist"
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.861272 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7"
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.918150 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-config\") pod \"02fc3cf7-03bd-4940-956b-51c16a41c310\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") "
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.918292 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-proxy-ca-bundles\") pod \"02fc3cf7-03bd-4940-956b-51c16a41c310\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") "
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.918346 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9l4n\" (UniqueName: \"kubernetes.io/projected/02fc3cf7-03bd-4940-956b-51c16a41c310-kube-api-access-m9l4n\") pod \"02fc3cf7-03bd-4940-956b-51c16a41c310\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") "
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.918477 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-client-ca\") pod \"02fc3cf7-03bd-4940-956b-51c16a41c310\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") "
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.918519 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02fc3cf7-03bd-4940-956b-51c16a41c310-serving-cert\") pod \"02fc3cf7-03bd-4940-956b-51c16a41c310\" (UID: \"02fc3cf7-03bd-4940-956b-51c16a41c310\") "
Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.919210 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for
volume "kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "02fc3cf7-03bd-4940-956b-51c16a41c310" (UID: "02fc3cf7-03bd-4940-956b-51c16a41c310"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.920238 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-client-ca" (OuterVolumeSpecName: "client-ca") pod "02fc3cf7-03bd-4940-956b-51c16a41c310" (UID: "02fc3cf7-03bd-4940-956b-51c16a41c310"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.920286 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-config" (OuterVolumeSpecName: "config") pod "02fc3cf7-03bd-4940-956b-51c16a41c310" (UID: "02fc3cf7-03bd-4940-956b-51c16a41c310"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.925511 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02fc3cf7-03bd-4940-956b-51c16a41c310-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "02fc3cf7-03bd-4940-956b-51c16a41c310" (UID: "02fc3cf7-03bd-4940-956b-51c16a41c310"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:43:47 crc kubenswrapper[5007]: I0218 19:43:47.929955 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02fc3cf7-03bd-4940-956b-51c16a41c310-kube-api-access-m9l4n" (OuterVolumeSpecName: "kube-api-access-m9l4n") pod "02fc3cf7-03bd-4940-956b-51c16a41c310" (UID: "02fc3cf7-03bd-4940-956b-51c16a41c310"). InnerVolumeSpecName "kube-api-access-m9l4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.019890 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-serving-cert\") pod \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.020002 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-config\") pod \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.020034 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8qb2\" (UniqueName: \"kubernetes.io/projected/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-kube-api-access-c8qb2\") pod \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.020065 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-client-ca\") pod \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\" (UID: \"41d2c625-82ed-4b63-bd0b-9acf5199e4f6\") " Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.020597 5007 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.020618 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9l4n\" (UniqueName: \"kubernetes.io/projected/02fc3cf7-03bd-4940-956b-51c16a41c310-kube-api-access-m9l4n\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.020632 5007 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.020644 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02fc3cf7-03bd-4940-956b-51c16a41c310-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.020660 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fc3cf7-03bd-4940-956b-51c16a41c310-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.021497 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-config" (OuterVolumeSpecName: "config") pod "41d2c625-82ed-4b63-bd0b-9acf5199e4f6" (UID: "41d2c625-82ed-4b63-bd0b-9acf5199e4f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.021648 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-client-ca" (OuterVolumeSpecName: "client-ca") pod "41d2c625-82ed-4b63-bd0b-9acf5199e4f6" (UID: "41d2c625-82ed-4b63-bd0b-9acf5199e4f6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.024664 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "41d2c625-82ed-4b63-bd0b-9acf5199e4f6" (UID: "41d2c625-82ed-4b63-bd0b-9acf5199e4f6"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.025250 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-kube-api-access-c8qb2" (OuterVolumeSpecName: "kube-api-access-c8qb2") pod "41d2c625-82ed-4b63-bd0b-9acf5199e4f6" (UID: "41d2c625-82ed-4b63-bd0b-9acf5199e4f6"). InnerVolumeSpecName "kube-api-access-c8qb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.122405 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.122447 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8qb2\" (UniqueName: \"kubernetes.io/projected/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-kube-api-access-c8qb2\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.122459 5007 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.122470 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41d2c625-82ed-4b63-bd0b-9acf5199e4f6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.149211 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gdbm9"] Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.155863 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gdbm9"] Feb 18 19:43:48 
crc kubenswrapper[5007]: I0218 19:43:48.829309 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" event={"ID":"41d2c625-82ed-4b63-bd0b-9acf5199e4f6","Type":"ContainerDied","Data":"d2ec36fe0c06efa238816b19da993d22cd2f92bc953b3882cef1394d5c734ab0"} Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.829351 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.829370 5007 scope.go:117] "RemoveContainer" containerID="4d763ccb2bdf90ced8a32d2ae1ae5c8e3090d26b92f0dce9d2269ea3f69bceee" Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.856596 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7"] Feb 18 19:43:48 crc kubenswrapper[5007]: I0218 19:43:48.859293 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jlmb7"] Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.070821 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw"] Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071292 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" containerName="extract-content" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071311 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" containerName="extract-content" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071329 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" containerName="registry-server" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 
19:43:49.071337 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" containerName="registry-server" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071350 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d2c625-82ed-4b63-bd0b-9acf5199e4f6" containerName="route-controller-manager" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071358 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d2c625-82ed-4b63-bd0b-9acf5199e4f6" containerName="route-controller-manager" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071369 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" containerName="extract-utilities" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071377 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" containerName="extract-utilities" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071390 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02fc3cf7-03bd-4940-956b-51c16a41c310" containerName="controller-manager" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071399 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="02fc3cf7-03bd-4940-956b-51c16a41c310" containerName="controller-manager" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071410 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84dca2c7-0ed9-4824-8431-88a26864b81c" containerName="extract-utilities" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071419 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="84dca2c7-0ed9-4824-8431-88a26864b81c" containerName="extract-utilities" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071448 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84dca2c7-0ed9-4824-8431-88a26864b81c" containerName="registry-server" Feb 18 19:43:49 crc 
kubenswrapper[5007]: I0218 19:43:49.071456 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="84dca2c7-0ed9-4824-8431-88a26864b81c" containerName="registry-server" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071465 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e8415f-6c51-452d-b0dc-13edd5475f6d" containerName="marketplace-operator" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071474 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e8415f-6c51-452d-b0dc-13edd5475f6d" containerName="marketplace-operator" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071489 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3504cb19-2058-4787-b5fb-c0b64aa2322f" containerName="registry-server" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071498 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="3504cb19-2058-4787-b5fb-c0b64aa2322f" containerName="registry-server" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071515 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3504cb19-2058-4787-b5fb-c0b64aa2322f" containerName="extract-utilities" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071524 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="3504cb19-2058-4787-b5fb-c0b64aa2322f" containerName="extract-utilities" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071538 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c778c82-e9ab-40e7-b084-07195587a975" containerName="extract-utilities" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071549 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c778c82-e9ab-40e7-b084-07195587a975" containerName="extract-utilities" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071563 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c778c82-e9ab-40e7-b084-07195587a975" containerName="extract-content" Feb 18 19:43:49 
crc kubenswrapper[5007]: I0218 19:43:49.071571 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c778c82-e9ab-40e7-b084-07195587a975" containerName="extract-content" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071582 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c778c82-e9ab-40e7-b084-07195587a975" containerName="registry-server" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071591 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c778c82-e9ab-40e7-b084-07195587a975" containerName="registry-server" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071607 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84dca2c7-0ed9-4824-8431-88a26864b81c" containerName="extract-content" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071616 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="84dca2c7-0ed9-4824-8431-88a26864b81c" containerName="extract-content" Feb 18 19:43:49 crc kubenswrapper[5007]: E0218 19:43:49.071626 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3504cb19-2058-4787-b5fb-c0b64aa2322f" containerName="extract-content" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071634 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="3504cb19-2058-4787-b5fb-c0b64aa2322f" containerName="extract-content" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071744 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="84dca2c7-0ed9-4824-8431-88a26864b81c" containerName="registry-server" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071758 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c778c82-e9ab-40e7-b084-07195587a975" containerName="registry-server" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071770 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="3504cb19-2058-4787-b5fb-c0b64aa2322f" containerName="registry-server" Feb 18 19:43:49 crc 
kubenswrapper[5007]: I0218 19:43:49.071779 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e8415f-6c51-452d-b0dc-13edd5475f6d" containerName="marketplace-operator" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071790 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cbfc029-9e57-4f39-942c-47f3c8f866b2" containerName="registry-server" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071803 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d2c625-82ed-4b63-bd0b-9acf5199e4f6" containerName="route-controller-manager" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.071818 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="02fc3cf7-03bd-4940-956b-51c16a41c310" containerName="controller-manager" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.072298 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.074944 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-868cbf6d4b-blghn"] Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.075045 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.075536 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.075538 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.075802 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.076200 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.076257 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.076389 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.080019 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.080104 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.080402 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.080592 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.081488 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.082552 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw"] Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.082580 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:43:49 crc kubenswrapper[5007]: 
I0218 19:43:49.092718 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868cbf6d4b-blghn"] Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.102372 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.237214 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-config\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.237274 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqcgb\" (UniqueName: \"kubernetes.io/projected/d5d50f9d-4285-4115-a7a1-14a8c29672af-kube-api-access-lqcgb\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.237316 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85-config\") pod \"route-controller-manager-6dbd89b5dd-fvprw\" (UID: \"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.237363 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85-serving-cert\") pod \"route-controller-manager-6dbd89b5dd-fvprw\" (UID: 
\"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.237473 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85-client-ca\") pod \"route-controller-manager-6dbd89b5dd-fvprw\" (UID: \"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.237530 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7w9x\" (UniqueName: \"kubernetes.io/projected/bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85-kube-api-access-c7w9x\") pod \"route-controller-manager-6dbd89b5dd-fvprw\" (UID: \"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.237573 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d50f9d-4285-4115-a7a1-14a8c29672af-serving-cert\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.237608 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-client-ca\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.237657 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-proxy-ca-bundles\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.338847 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85-client-ca\") pod \"route-controller-manager-6dbd89b5dd-fvprw\" (UID: \"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.338923 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7w9x\" (UniqueName: \"kubernetes.io/projected/bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85-kube-api-access-c7w9x\") pod \"route-controller-manager-6dbd89b5dd-fvprw\" (UID: \"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.338951 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d50f9d-4285-4115-a7a1-14a8c29672af-serving-cert\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.338969 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-client-ca\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: 
\"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.339000 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-proxy-ca-bundles\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.339032 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqcgb\" (UniqueName: \"kubernetes.io/projected/d5d50f9d-4285-4115-a7a1-14a8c29672af-kube-api-access-lqcgb\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.339051 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-config\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.339070 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85-config\") pod \"route-controller-manager-6dbd89b5dd-fvprw\" (UID: \"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.339095 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85-serving-cert\") pod \"route-controller-manager-6dbd89b5dd-fvprw\" (UID: \"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.340150 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-client-ca\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.340740 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85-client-ca\") pod \"route-controller-manager-6dbd89b5dd-fvprw\" (UID: \"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.341247 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-proxy-ca-bundles\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.341717 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-config\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.343379 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85-config\") pod \"route-controller-manager-6dbd89b5dd-fvprw\" (UID: \"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.344827 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d50f9d-4285-4115-a7a1-14a8c29672af-serving-cert\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.345665 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85-serving-cert\") pod \"route-controller-manager-6dbd89b5dd-fvprw\" (UID: \"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.357525 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqcgb\" (UniqueName: \"kubernetes.io/projected/d5d50f9d-4285-4115-a7a1-14a8c29672af-kube-api-access-lqcgb\") pod \"controller-manager-868cbf6d4b-blghn\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.357750 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7w9x\" (UniqueName: \"kubernetes.io/projected/bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85-kube-api-access-c7w9x\") pod \"route-controller-manager-6dbd89b5dd-fvprw\" (UID: \"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.398192 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.420689 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.842049 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw"]
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.849577 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868cbf6d4b-blghn"]
Feb 18 19:43:49 crc kubenswrapper[5007]: W0218 19:43:49.851476 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfb4fa70_7fcf_4c3b_a3ea_ef1a633eeb85.slice/crio-b04e97509be5d63d836a765cfd77355ec57d30f39bebb7fc7a91f936ddf3c726 WatchSource:0}: Error finding container b04e97509be5d63d836a765cfd77355ec57d30f39bebb7fc7a91f936ddf3c726: Status 404 returned error can't find the container with id b04e97509be5d63d836a765cfd77355ec57d30f39bebb7fc7a91f936ddf3c726
Feb 18 19:43:49 crc kubenswrapper[5007]: W0218 19:43:49.852190 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5d50f9d_4285_4115_a7a1_14a8c29672af.slice/crio-c5d74c0ca269fd1fbdacb4c0b9223c611597ba8c7386af6139445d32e9f8dba4 WatchSource:0}: Error finding container c5d74c0ca269fd1fbdacb4c0b9223c611597ba8c7386af6139445d32e9f8dba4: Status 404 returned error can't find the container with id c5d74c0ca269fd1fbdacb4c0b9223c611597ba8c7386af6139445d32e9f8dba4
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.916675 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02fc3cf7-03bd-4940-956b-51c16a41c310" path="/var/lib/kubelet/pods/02fc3cf7-03bd-4940-956b-51c16a41c310/volumes"
Feb 18 19:43:49 crc kubenswrapper[5007]: I0218 19:43:49.917585 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d2c625-82ed-4b63-bd0b-9acf5199e4f6" path="/var/lib/kubelet/pods/41d2c625-82ed-4b63-bd0b-9acf5199e4f6/volumes"
Feb 18 19:43:50 crc kubenswrapper[5007]: I0218 19:43:50.845184 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" event={"ID":"d5d50f9d-4285-4115-a7a1-14a8c29672af","Type":"ContainerStarted","Data":"c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c"}
Feb 18 19:43:50 crc kubenswrapper[5007]: I0218 19:43:50.845669 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" event={"ID":"d5d50f9d-4285-4115-a7a1-14a8c29672af","Type":"ContainerStarted","Data":"c5d74c0ca269fd1fbdacb4c0b9223c611597ba8c7386af6139445d32e9f8dba4"}
Feb 18 19:43:50 crc kubenswrapper[5007]: I0218 19:43:50.845698 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn"
Feb 18 19:43:50 crc kubenswrapper[5007]: I0218 19:43:50.848147 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw" event={"ID":"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85","Type":"ContainerStarted","Data":"99dc75e42badb7bea2b912aca3f246febc89e9bb814a9ebe619b6f70af3f7ce3"}
Feb 18 19:43:50 crc kubenswrapper[5007]: I0218 19:43:50.848205 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw" event={"ID":"bfb4fa70-7fcf-4c3b-a3ea-ef1a633eeb85","Type":"ContainerStarted","Data":"b04e97509be5d63d836a765cfd77355ec57d30f39bebb7fc7a91f936ddf3c726"}
Feb 18 19:43:50 crc kubenswrapper[5007]: I0218 19:43:50.848464 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw"
Feb 18 19:43:50 crc kubenswrapper[5007]: I0218 19:43:50.851703 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn"
Feb 18 19:43:50 crc kubenswrapper[5007]: I0218 19:43:50.857740 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw"
Feb 18 19:43:50 crc kubenswrapper[5007]: I0218 19:43:50.886880 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" podStartSLOduration=3.886859706 podStartE2EDuration="3.886859706s" podCreationTimestamp="2026-02-18 19:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:43:50.883418752 +0000 UTC m=+315.665538296" watchObservedRunningTime="2026-02-18 19:43:50.886859706 +0000 UTC m=+315.668979230"
Feb 18 19:43:50 crc kubenswrapper[5007]: I0218 19:43:50.902467 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-fvprw" podStartSLOduration=3.902447072 podStartE2EDuration="3.902447072s" podCreationTimestamp="2026-02-18 19:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:43:50.90166384 +0000 UTC m=+315.683783374" watchObservedRunningTime="2026-02-18 19:43:50.902447072 +0000 UTC m=+315.684566596"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.645534 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pn9lz"]
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.647169 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pn9lz"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.649979 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.658179 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pn9lz"]
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.669913 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52fb086c-494b-4ac3-a878-1a72e6e0063b-catalog-content\") pod \"community-operators-pn9lz\" (UID: \"52fb086c-494b-4ac3-a878-1a72e6e0063b\") " pod="openshift-marketplace/community-operators-pn9lz"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.669990 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52fb086c-494b-4ac3-a878-1a72e6e0063b-utilities\") pod \"community-operators-pn9lz\" (UID: \"52fb086c-494b-4ac3-a878-1a72e6e0063b\") " pod="openshift-marketplace/community-operators-pn9lz"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.670024 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppgcs\" (UniqueName: \"kubernetes.io/projected/52fb086c-494b-4ac3-a878-1a72e6e0063b-kube-api-access-ppgcs\") pod \"community-operators-pn9lz\" (UID: \"52fb086c-494b-4ac3-a878-1a72e6e0063b\") " pod="openshift-marketplace/community-operators-pn9lz"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.770666 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52fb086c-494b-4ac3-a878-1a72e6e0063b-utilities\") pod \"community-operators-pn9lz\" (UID: \"52fb086c-494b-4ac3-a878-1a72e6e0063b\") " pod="openshift-marketplace/community-operators-pn9lz"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.770915 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppgcs\" (UniqueName: \"kubernetes.io/projected/52fb086c-494b-4ac3-a878-1a72e6e0063b-kube-api-access-ppgcs\") pod \"community-operators-pn9lz\" (UID: \"52fb086c-494b-4ac3-a878-1a72e6e0063b\") " pod="openshift-marketplace/community-operators-pn9lz"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.771147 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52fb086c-494b-4ac3-a878-1a72e6e0063b-utilities\") pod \"community-operators-pn9lz\" (UID: \"52fb086c-494b-4ac3-a878-1a72e6e0063b\") " pod="openshift-marketplace/community-operators-pn9lz"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.771193 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52fb086c-494b-4ac3-a878-1a72e6e0063b-catalog-content\") pod \"community-operators-pn9lz\" (UID: \"52fb086c-494b-4ac3-a878-1a72e6e0063b\") " pod="openshift-marketplace/community-operators-pn9lz"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.771463 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52fb086c-494b-4ac3-a878-1a72e6e0063b-catalog-content\") pod \"community-operators-pn9lz\" (UID: \"52fb086c-494b-4ac3-a878-1a72e6e0063b\") " pod="openshift-marketplace/community-operators-pn9lz"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.790304 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppgcs\" (UniqueName: \"kubernetes.io/projected/52fb086c-494b-4ac3-a878-1a72e6e0063b-kube-api-access-ppgcs\") pod \"community-operators-pn9lz\" (UID: \"52fb086c-494b-4ac3-a878-1a72e6e0063b\") " pod="openshift-marketplace/community-operators-pn9lz"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.841165 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vmpzn"]
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.842452 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vmpzn"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.844753 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.853774 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vmpzn"]
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.872369 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6-catalog-content\") pod \"certified-operators-vmpzn\" (UID: \"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6\") " pod="openshift-marketplace/certified-operators-vmpzn"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.872415 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rc2w\" (UniqueName: \"kubernetes.io/projected/b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6-kube-api-access-6rc2w\") pod \"certified-operators-vmpzn\" (UID: \"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6\") " pod="openshift-marketplace/certified-operators-vmpzn"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.872516 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6-utilities\") pod \"certified-operators-vmpzn\" (UID: \"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6\") " pod="openshift-marketplace/certified-operators-vmpzn"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.973918 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6-utilities\") pod \"certified-operators-vmpzn\" (UID: \"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6\") " pod="openshift-marketplace/certified-operators-vmpzn"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.974279 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6-catalog-content\") pod \"certified-operators-vmpzn\" (UID: \"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6\") " pod="openshift-marketplace/certified-operators-vmpzn"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.974356 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rc2w\" (UniqueName: \"kubernetes.io/projected/b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6-kube-api-access-6rc2w\") pod \"certified-operators-vmpzn\" (UID: \"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6\") " pod="openshift-marketplace/certified-operators-vmpzn"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.974685 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6-catalog-content\") pod \"certified-operators-vmpzn\" (UID: \"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6\") " pod="openshift-marketplace/certified-operators-vmpzn"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.975090 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6-utilities\") pod \"certified-operators-vmpzn\" (UID: \"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6\") " pod="openshift-marketplace/certified-operators-vmpzn"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.985914 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pn9lz"
Feb 18 19:44:05 crc kubenswrapper[5007]: I0218 19:44:05.994651 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rc2w\" (UniqueName: \"kubernetes.io/projected/b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6-kube-api-access-6rc2w\") pod \"certified-operators-vmpzn\" (UID: \"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6\") " pod="openshift-marketplace/certified-operators-vmpzn"
Feb 18 19:44:06 crc kubenswrapper[5007]: I0218 19:44:06.173756 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vmpzn"
Feb 18 19:44:06 crc kubenswrapper[5007]: I0218 19:44:06.489404 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pn9lz"]
Feb 18 19:44:06 crc kubenswrapper[5007]: I0218 19:44:06.628292 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vmpzn"]
Feb 18 19:44:06 crc kubenswrapper[5007]: W0218 19:44:06.636567 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6bbfa1d_8659_470f_9b1b_806d8e6f1ca6.slice/crio-c26ed65e2824200df3ce54544ba1f67196964cb73f9fb998086cb3fa36cbb554 WatchSource:0}: Error finding container c26ed65e2824200df3ce54544ba1f67196964cb73f9fb998086cb3fa36cbb554: Status 404 returned error can't find the container with id c26ed65e2824200df3ce54544ba1f67196964cb73f9fb998086cb3fa36cbb554
Feb 18 19:44:06 crc kubenswrapper[5007]: I0218 19:44:06.947651 5007 generic.go:334] "Generic (PLEG): container finished" podID="52fb086c-494b-4ac3-a878-1a72e6e0063b" containerID="b4b1dfe1f66accc92731c2a4961bba36b63009abbbca23b2bd886832e312b295" exitCode=0
Feb 18 19:44:06 crc kubenswrapper[5007]: I0218 19:44:06.947735 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn9lz" event={"ID":"52fb086c-494b-4ac3-a878-1a72e6e0063b","Type":"ContainerDied","Data":"b4b1dfe1f66accc92731c2a4961bba36b63009abbbca23b2bd886832e312b295"}
Feb 18 19:44:06 crc kubenswrapper[5007]: I0218 19:44:06.947763 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn9lz" event={"ID":"52fb086c-494b-4ac3-a878-1a72e6e0063b","Type":"ContainerStarted","Data":"70bdb3dc4b4c369c61c6673354c440b4480140de3ee388bd2509db370e76f33e"}
Feb 18 19:44:06 crc kubenswrapper[5007]: I0218 19:44:06.949470 5007 generic.go:334] "Generic (PLEG): container finished" podID="b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6" containerID="fa3b9219e2097748abbfdab9632f3c4c80a880ffe5a7e6d7852716404faad8a3" exitCode=0
Feb 18 19:44:06 crc kubenswrapper[5007]: I0218 19:44:06.949498 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmpzn" event={"ID":"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6","Type":"ContainerDied","Data":"fa3b9219e2097748abbfdab9632f3c4c80a880ffe5a7e6d7852716404faad8a3"}
Feb 18 19:44:06 crc kubenswrapper[5007]: I0218 19:44:06.949516 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmpzn" event={"ID":"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6","Type":"ContainerStarted","Data":"c26ed65e2824200df3ce54544ba1f67196964cb73f9fb998086cb3fa36cbb554"}
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.043500 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f7kxn"]
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.044821 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f7kxn"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.047231 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.055025 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f7kxn"]
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.123362 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9f150e-b129-4006-a436-b08898fcfcdc-catalog-content\") pod \"redhat-marketplace-f7kxn\" (UID: \"1e9f150e-b129-4006-a436-b08898fcfcdc\") " pod="openshift-marketplace/redhat-marketplace-f7kxn"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.123713 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gj8v\" (UniqueName: \"kubernetes.io/projected/1e9f150e-b129-4006-a436-b08898fcfcdc-kube-api-access-4gj8v\") pod \"redhat-marketplace-f7kxn\" (UID: \"1e9f150e-b129-4006-a436-b08898fcfcdc\") " pod="openshift-marketplace/redhat-marketplace-f7kxn"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.123752 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9f150e-b129-4006-a436-b08898fcfcdc-utilities\") pod \"redhat-marketplace-f7kxn\" (UID: \"1e9f150e-b129-4006-a436-b08898fcfcdc\") " pod="openshift-marketplace/redhat-marketplace-f7kxn"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.225532 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9f150e-b129-4006-a436-b08898fcfcdc-utilities\") pod \"redhat-marketplace-f7kxn\" (UID: \"1e9f150e-b129-4006-a436-b08898fcfcdc\") " pod="openshift-marketplace/redhat-marketplace-f7kxn"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.225632 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9f150e-b129-4006-a436-b08898fcfcdc-catalog-content\") pod \"redhat-marketplace-f7kxn\" (UID: \"1e9f150e-b129-4006-a436-b08898fcfcdc\") " pod="openshift-marketplace/redhat-marketplace-f7kxn"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.225678 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gj8v\" (UniqueName: \"kubernetes.io/projected/1e9f150e-b129-4006-a436-b08898fcfcdc-kube-api-access-4gj8v\") pod \"redhat-marketplace-f7kxn\" (UID: \"1e9f150e-b129-4006-a436-b08898fcfcdc\") " pod="openshift-marketplace/redhat-marketplace-f7kxn"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.226286 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e9f150e-b129-4006-a436-b08898fcfcdc-utilities\") pod \"redhat-marketplace-f7kxn\" (UID: \"1e9f150e-b129-4006-a436-b08898fcfcdc\") " pod="openshift-marketplace/redhat-marketplace-f7kxn"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.226297 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e9f150e-b129-4006-a436-b08898fcfcdc-catalog-content\") pod \"redhat-marketplace-f7kxn\" (UID: \"1e9f150e-b129-4006-a436-b08898fcfcdc\") " pod="openshift-marketplace/redhat-marketplace-f7kxn"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.242977 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gj8v\" (UniqueName: \"kubernetes.io/projected/1e9f150e-b129-4006-a436-b08898fcfcdc-kube-api-access-4gj8v\") pod \"redhat-marketplace-f7kxn\" (UID: \"1e9f150e-b129-4006-a436-b08898fcfcdc\") " pod="openshift-marketplace/redhat-marketplace-f7kxn"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.362228 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f7kxn"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.478092 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k96cg"]
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.479258 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k96cg"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.481642 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.502583 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k96cg"]
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.534939 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e5a53d0-fe04-4c5e-a20d-230f047f42f3-utilities\") pod \"redhat-operators-k96cg\" (UID: \"9e5a53d0-fe04-4c5e-a20d-230f047f42f3\") " pod="openshift-marketplace/redhat-operators-k96cg"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.535016 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e5a53d0-fe04-4c5e-a20d-230f047f42f3-catalog-content\") pod \"redhat-operators-k96cg\" (UID: \"9e5a53d0-fe04-4c5e-a20d-230f047f42f3\") " pod="openshift-marketplace/redhat-operators-k96cg"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.535073 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmh4q\" (UniqueName: \"kubernetes.io/projected/9e5a53d0-fe04-4c5e-a20d-230f047f42f3-kube-api-access-fmh4q\") pod \"redhat-operators-k96cg\" (UID: \"9e5a53d0-fe04-4c5e-a20d-230f047f42f3\") " pod="openshift-marketplace/redhat-operators-k96cg"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.636924 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e5a53d0-fe04-4c5e-a20d-230f047f42f3-utilities\") pod \"redhat-operators-k96cg\" (UID: \"9e5a53d0-fe04-4c5e-a20d-230f047f42f3\") " pod="openshift-marketplace/redhat-operators-k96cg"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.637380 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e5a53d0-fe04-4c5e-a20d-230f047f42f3-catalog-content\") pod \"redhat-operators-k96cg\" (UID: \"9e5a53d0-fe04-4c5e-a20d-230f047f42f3\") " pod="openshift-marketplace/redhat-operators-k96cg"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.637423 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmh4q\" (UniqueName: \"kubernetes.io/projected/9e5a53d0-fe04-4c5e-a20d-230f047f42f3-kube-api-access-fmh4q\") pod \"redhat-operators-k96cg\" (UID: \"9e5a53d0-fe04-4c5e-a20d-230f047f42f3\") " pod="openshift-marketplace/redhat-operators-k96cg"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.637748 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e5a53d0-fe04-4c5e-a20d-230f047f42f3-catalog-content\") pod \"redhat-operators-k96cg\" (UID: \"9e5a53d0-fe04-4c5e-a20d-230f047f42f3\") " pod="openshift-marketplace/redhat-operators-k96cg"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.640121 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e5a53d0-fe04-4c5e-a20d-230f047f42f3-utilities\") pod \"redhat-operators-k96cg\" (UID: \"9e5a53d0-fe04-4c5e-a20d-230f047f42f3\") " pod="openshift-marketplace/redhat-operators-k96cg"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.655417 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmh4q\" (UniqueName: \"kubernetes.io/projected/9e5a53d0-fe04-4c5e-a20d-230f047f42f3-kube-api-access-fmh4q\") pod \"redhat-operators-k96cg\" (UID: \"9e5a53d0-fe04-4c5e-a20d-230f047f42f3\") " pod="openshift-marketplace/redhat-operators-k96cg"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.827544 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k96cg"
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.902934 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f7kxn"]
Feb 18 19:44:08 crc kubenswrapper[5007]: W0218 19:44:08.924458 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e9f150e_b129_4006_a436_b08898fcfcdc.slice/crio-527254d10d08d53b69f2dbb6aed6ab6ce6c6fb2bdee7b00cd6f1cdce1944d2aa WatchSource:0}: Error finding container 527254d10d08d53b69f2dbb6aed6ab6ce6c6fb2bdee7b00cd6f1cdce1944d2aa: Status 404 returned error can't find the container with id 527254d10d08d53b69f2dbb6aed6ab6ce6c6fb2bdee7b00cd6f1cdce1944d2aa
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.964263 5007 generic.go:334] "Generic (PLEG): container finished" podID="52fb086c-494b-4ac3-a878-1a72e6e0063b" containerID="6c68ee95698f46284ae7c4f957fa6f5f92e79abab7e1c7cfaad4ddc3a41fdc6e" exitCode=0
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.964353 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn9lz" event={"ID":"52fb086c-494b-4ac3-a878-1a72e6e0063b","Type":"ContainerDied","Data":"6c68ee95698f46284ae7c4f957fa6f5f92e79abab7e1c7cfaad4ddc3a41fdc6e"}
Feb 18 19:44:08 crc kubenswrapper[5007]: I0218 19:44:08.967527 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7kxn" event={"ID":"1e9f150e-b129-4006-a436-b08898fcfcdc","Type":"ContainerStarted","Data":"527254d10d08d53b69f2dbb6aed6ab6ce6c6fb2bdee7b00cd6f1cdce1944d2aa"}
Feb 18 19:44:09 crc kubenswrapper[5007]: E0218 19:44:09.157117 5007 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e9f150e_b129_4006_a436_b08898fcfcdc.slice/crio-07e7728021e7d4da34e3159cfd42f4f2844ed0adb078fa95d1dbf44980f7d569.scope\": RecentStats: unable to find data in memory cache]"
Feb 18 19:44:09 crc kubenswrapper[5007]: I0218 19:44:09.241568 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k96cg"]
Feb 18 19:44:09 crc kubenswrapper[5007]: I0218 19:44:09.975903 5007 generic.go:334] "Generic (PLEG): container finished" podID="b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6" containerID="18d2bfa789f4601c24ad286309563cd2c2a0b3a8c14cc8eed3c637ebff664315" exitCode=0
Feb 18 19:44:09 crc kubenswrapper[5007]: I0218 19:44:09.976016 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmpzn" event={"ID":"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6","Type":"ContainerDied","Data":"18d2bfa789f4601c24ad286309563cd2c2a0b3a8c14cc8eed3c637ebff664315"}
Feb 18 19:44:09 crc kubenswrapper[5007]: I0218 19:44:09.978908 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pn9lz" event={"ID":"52fb086c-494b-4ac3-a878-1a72e6e0063b","Type":"ContainerStarted","Data":"3a20484c317239fcc3e294cc701c4d396aebfb3f12ddf73312f53b4a1afc4412"}
Feb 18 19:44:09 crc kubenswrapper[5007]: I0218 19:44:09.980846 5007 generic.go:334] "Generic (PLEG): container finished" podID="9e5a53d0-fe04-4c5e-a20d-230f047f42f3" containerID="5cdf34d2d575fc7b6944dcd24c534dc5b60ea03d4540640d0536b88c92ca2748" exitCode=0
Feb 18 19:44:09 crc kubenswrapper[5007]: I0218 19:44:09.980920 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k96cg" event={"ID":"9e5a53d0-fe04-4c5e-a20d-230f047f42f3","Type":"ContainerDied","Data":"5cdf34d2d575fc7b6944dcd24c534dc5b60ea03d4540640d0536b88c92ca2748"}
Feb 18 19:44:09 crc kubenswrapper[5007]: I0218 19:44:09.980961 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k96cg" event={"ID":"9e5a53d0-fe04-4c5e-a20d-230f047f42f3","Type":"ContainerStarted","Data":"1433154ff7033097cae6f56d5608b2a0582d49b37d56aec0193a20d095891846"}
Feb 18 19:44:09 crc kubenswrapper[5007]: I0218 19:44:09.982982 5007 generic.go:334] "Generic (PLEG): container finished" podID="1e9f150e-b129-4006-a436-b08898fcfcdc" containerID="07e7728021e7d4da34e3159cfd42f4f2844ed0adb078fa95d1dbf44980f7d569" exitCode=0
Feb 18 19:44:09 crc kubenswrapper[5007]: I0218 19:44:09.983026 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7kxn" event={"ID":"1e9f150e-b129-4006-a436-b08898fcfcdc","Type":"ContainerDied","Data":"07e7728021e7d4da34e3159cfd42f4f2844ed0adb078fa95d1dbf44980f7d569"}
Feb 18 19:44:10 crc kubenswrapper[5007]: I0218 19:44:10.035689 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pn9lz" podStartSLOduration=2.574165571 podStartE2EDuration="5.035669074s" podCreationTimestamp="2026-02-18 19:44:05 +0000 UTC" firstStartedPulling="2026-02-18 19:44:06.94890728 +0000 UTC m=+331.731026804" lastFinishedPulling="2026-02-18 19:44:09.410410783 +0000 UTC m=+334.192530307" observedRunningTime="2026-02-18 19:44:10.032195358 +0000 UTC m=+334.814314902" watchObservedRunningTime="2026-02-18 19:44:10.035669074 +0000 UTC m=+334.817788598"
Feb 18 19:44:10 crc kubenswrapper[5007]: I0218 19:44:10.990641 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmpzn" event={"ID":"b6bbfa1d-8659-470f-9b1b-806d8e6f1ca6","Type":"ContainerStarted","Data":"a67bdf5e60e47be443f1d445e40b0ec22d08de260c7983bd89609d353bdd94ff"}
Feb 18 19:44:11 crc kubenswrapper[5007]: I0218 19:44:11.012642 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vmpzn" podStartSLOduration=2.334879992 podStartE2EDuration="6.012625974s" podCreationTimestamp="2026-02-18 19:44:05 +0000 UTC" firstStartedPulling="2026-02-18 19:44:06.950464993 +0000 UTC m=+331.732584517" lastFinishedPulling="2026-02-18 19:44:10.628210975 +0000 UTC m=+335.410330499" observedRunningTime="2026-02-18 19:44:11.007098052 +0000 UTC m=+335.789217576" watchObservedRunningTime="2026-02-18 19:44:11.012625974 +0000 UTC m=+335.794745498"
Feb 18 19:44:11 crc kubenswrapper[5007]: I0218 19:44:11.999458 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k96cg" event={"ID":"9e5a53d0-fe04-4c5e-a20d-230f047f42f3","Type":"ContainerStarted","Data":"9d7563c6411e25daec7836bb61d6532a261b86703c1f1bb3b55f6cce610e0047"}
Feb 18 19:44:12 crc kubenswrapper[5007]: I0218 19:44:12.002451 5007 generic.go:334] "Generic (PLEG): container finished" podID="1e9f150e-b129-4006-a436-b08898fcfcdc" containerID="3ce2c9a1325449f46357c4ee2f52d6a241f35876fe2f2dc9e4e444cc60bb1635" exitCode=0
Feb 18 19:44:12 crc kubenswrapper[5007]: I0218 19:44:12.002551 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7kxn" event={"ID":"1e9f150e-b129-4006-a436-b08898fcfcdc","Type":"ContainerDied","Data":"3ce2c9a1325449f46357c4ee2f52d6a241f35876fe2f2dc9e4e444cc60bb1635"}
Feb 18 19:44:13 crc kubenswrapper[5007]: I0218 19:44:13.010850
5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7kxn" event={"ID":"1e9f150e-b129-4006-a436-b08898fcfcdc","Type":"ContainerStarted","Data":"cc3bf96aeaae73ba4f5814344af4dfeb69dd9667bd0982a0eb6e6026263b794a"} Feb 18 19:44:13 crc kubenswrapper[5007]: I0218 19:44:13.013131 5007 generic.go:334] "Generic (PLEG): container finished" podID="9e5a53d0-fe04-4c5e-a20d-230f047f42f3" containerID="9d7563c6411e25daec7836bb61d6532a261b86703c1f1bb3b55f6cce610e0047" exitCode=0 Feb 18 19:44:13 crc kubenswrapper[5007]: I0218 19:44:13.013272 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k96cg" event={"ID":"9e5a53d0-fe04-4c5e-a20d-230f047f42f3","Type":"ContainerDied","Data":"9d7563c6411e25daec7836bb61d6532a261b86703c1f1bb3b55f6cce610e0047"} Feb 18 19:44:13 crc kubenswrapper[5007]: I0218 19:44:13.041141 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f7kxn" podStartSLOduration=2.647785513 podStartE2EDuration="5.041123522s" podCreationTimestamp="2026-02-18 19:44:08 +0000 UTC" firstStartedPulling="2026-02-18 19:44:09.984125677 +0000 UTC m=+334.766245201" lastFinishedPulling="2026-02-18 19:44:12.377463686 +0000 UTC m=+337.159583210" observedRunningTime="2026-02-18 19:44:13.036117965 +0000 UTC m=+337.818237499" watchObservedRunningTime="2026-02-18 19:44:13.041123522 +0000 UTC m=+337.823243046" Feb 18 19:44:14 crc kubenswrapper[5007]: I0218 19:44:14.023780 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k96cg" event={"ID":"9e5a53d0-fe04-4c5e-a20d-230f047f42f3","Type":"ContainerStarted","Data":"f91078d8b1aa96031fa22c835ae0c156599de9b00f87e59d7d87d980932a3ee9"} Feb 18 19:44:14 crc kubenswrapper[5007]: I0218 19:44:14.041801 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k96cg" 
podStartSLOduration=2.577222944 podStartE2EDuration="6.041782884s" podCreationTimestamp="2026-02-18 19:44:08 +0000 UTC" firstStartedPulling="2026-02-18 19:44:09.982155653 +0000 UTC m=+334.764275187" lastFinishedPulling="2026-02-18 19:44:13.446715603 +0000 UTC m=+338.228835127" observedRunningTime="2026-02-18 19:44:14.039950043 +0000 UTC m=+338.822069587" watchObservedRunningTime="2026-02-18 19:44:14.041782884 +0000 UTC m=+338.823902408" Feb 18 19:44:15 crc kubenswrapper[5007]: I0218 19:44:15.663943 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:44:15 crc kubenswrapper[5007]: I0218 19:44:15.664017 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:44:15 crc kubenswrapper[5007]: I0218 19:44:15.987063 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pn9lz" Feb 18 19:44:15 crc kubenswrapper[5007]: I0218 19:44:15.987394 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pn9lz" Feb 18 19:44:16 crc kubenswrapper[5007]: I0218 19:44:16.045870 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pn9lz" Feb 18 19:44:16 crc kubenswrapper[5007]: I0218 19:44:16.174506 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vmpzn" Feb 18 19:44:16 crc kubenswrapper[5007]: I0218 
19:44:16.174839 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vmpzn" Feb 18 19:44:16 crc kubenswrapper[5007]: I0218 19:44:16.225671 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vmpzn" Feb 18 19:44:16 crc kubenswrapper[5007]: I0218 19:44:16.888163 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dxdkz"] Feb 18 19:44:16 crc kubenswrapper[5007]: I0218 19:44:16.889461 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:16 crc kubenswrapper[5007]: I0218 19:44:16.910025 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dxdkz"] Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.067376 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e44309e-adfb-4e58-8381-6f5d71edbe4e-bound-sa-token\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.067458 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-674jv\" (UniqueName: \"kubernetes.io/projected/2e44309e-adfb-4e58-8381-6f5d71edbe4e-kube-api-access-674jv\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.067485 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/2e44309e-adfb-4e58-8381-6f5d71edbe4e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.067530 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e44309e-adfb-4e58-8381-6f5d71edbe4e-registry-tls\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.067559 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.067578 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e44309e-adfb-4e58-8381-6f5d71edbe4e-trusted-ca\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.067602 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e44309e-adfb-4e58-8381-6f5d71edbe4e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc 
kubenswrapper[5007]: I0218 19:44:17.067627 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e44309e-adfb-4e58-8381-6f5d71edbe4e-registry-certificates\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.079626 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vmpzn" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.095248 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.096021 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pn9lz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.168732 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e44309e-adfb-4e58-8381-6f5d71edbe4e-registry-tls\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.168788 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e44309e-adfb-4e58-8381-6f5d71edbe4e-trusted-ca\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.168820 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e44309e-adfb-4e58-8381-6f5d71edbe4e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.168865 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e44309e-adfb-4e58-8381-6f5d71edbe4e-registry-certificates\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.168925 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e44309e-adfb-4e58-8381-6f5d71edbe4e-bound-sa-token\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.168963 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-674jv\" (UniqueName: \"kubernetes.io/projected/2e44309e-adfb-4e58-8381-6f5d71edbe4e-kube-api-access-674jv\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.168987 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/2e44309e-adfb-4e58-8381-6f5d71edbe4e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.169334 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e44309e-adfb-4e58-8381-6f5d71edbe4e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.170828 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e44309e-adfb-4e58-8381-6f5d71edbe4e-registry-certificates\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.171923 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e44309e-adfb-4e58-8381-6f5d71edbe4e-trusted-ca\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.182203 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e44309e-adfb-4e58-8381-6f5d71edbe4e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.182623 5007 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e44309e-adfb-4e58-8381-6f5d71edbe4e-registry-tls\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.192242 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-674jv\" (UniqueName: \"kubernetes.io/projected/2e44309e-adfb-4e58-8381-6f5d71edbe4e-kube-api-access-674jv\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.194396 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e44309e-adfb-4e58-8381-6f5d71edbe4e-bound-sa-token\") pod \"image-registry-66df7c8f76-dxdkz\" (UID: \"2e44309e-adfb-4e58-8381-6f5d71edbe4e\") " pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.204107 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:17 crc kubenswrapper[5007]: I0218 19:44:17.588331 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dxdkz"] Feb 18 19:44:18 crc kubenswrapper[5007]: I0218 19:44:18.047218 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" event={"ID":"2e44309e-adfb-4e58-8381-6f5d71edbe4e","Type":"ContainerStarted","Data":"5538793c251c6812c952360a5f1f4bd2e237a9b28640c73b1f5b46b35ae3175f"} Feb 18 19:44:18 crc kubenswrapper[5007]: I0218 19:44:18.047572 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" event={"ID":"2e44309e-adfb-4e58-8381-6f5d71edbe4e","Type":"ContainerStarted","Data":"8609ca189cd4de22c84ccf65e0879be38e22de6349221de2b17c2369b07fa377"} Feb 18 19:44:18 crc kubenswrapper[5007]: I0218 19:44:18.073713 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" podStartSLOduration=2.073681803 podStartE2EDuration="2.073681803s" podCreationTimestamp="2026-02-18 19:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:44:18.0699412 +0000 UTC m=+342.852060734" watchObservedRunningTime="2026-02-18 19:44:18.073681803 +0000 UTC m=+342.855801347" Feb 18 19:44:18 crc kubenswrapper[5007]: I0218 19:44:18.362636 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f7kxn" Feb 18 19:44:18 crc kubenswrapper[5007]: I0218 19:44:18.362926 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f7kxn" Feb 18 19:44:18 crc kubenswrapper[5007]: I0218 19:44:18.405368 5007 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f7kxn" Feb 18 19:44:18 crc kubenswrapper[5007]: I0218 19:44:18.828404 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k96cg" Feb 18 19:44:18 crc kubenswrapper[5007]: I0218 19:44:18.828519 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k96cg" Feb 18 19:44:19 crc kubenswrapper[5007]: I0218 19:44:19.051970 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:19 crc kubenswrapper[5007]: I0218 19:44:19.091630 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f7kxn" Feb 18 19:44:19 crc kubenswrapper[5007]: I0218 19:44:19.873046 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k96cg" podUID="9e5a53d0-fe04-4c5e-a20d-230f047f42f3" containerName="registry-server" probeResult="failure" output=< Feb 18 19:44:19 crc kubenswrapper[5007]: timeout: failed to connect service ":50051" within 1s Feb 18 19:44:19 crc kubenswrapper[5007]: > Feb 18 19:44:28 crc kubenswrapper[5007]: I0218 19:44:28.885754 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k96cg" Feb 18 19:44:28 crc kubenswrapper[5007]: I0218 19:44:28.927211 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k96cg" Feb 18 19:44:37 crc kubenswrapper[5007]: I0218 19:44:37.209905 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-dxdkz" Feb 18 19:44:37 crc kubenswrapper[5007]: I0218 19:44:37.272832 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-4vmwt"] Feb 18 19:44:45 crc kubenswrapper[5007]: I0218 19:44:45.664022 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:44:45 crc kubenswrapper[5007]: I0218 19:44:45.665120 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.363727 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-868cbf6d4b-blghn"] Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.364481 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" podUID="d5d50f9d-4285-4115-a7a1-14a8c29672af" containerName="controller-manager" containerID="cri-o://c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c" gracePeriod=30 Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.804280 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.948669 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d50f9d-4285-4115-a7a1-14a8c29672af-serving-cert\") pod \"d5d50f9d-4285-4115-a7a1-14a8c29672af\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.948766 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-config\") pod \"d5d50f9d-4285-4115-a7a1-14a8c29672af\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.948813 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-client-ca\") pod \"d5d50f9d-4285-4115-a7a1-14a8c29672af\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.948843 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqcgb\" (UniqueName: \"kubernetes.io/projected/d5d50f9d-4285-4115-a7a1-14a8c29672af-kube-api-access-lqcgb\") pod \"d5d50f9d-4285-4115-a7a1-14a8c29672af\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.948880 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-proxy-ca-bundles\") pod \"d5d50f9d-4285-4115-a7a1-14a8c29672af\" (UID: \"d5d50f9d-4285-4115-a7a1-14a8c29672af\") " Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.950270 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-client-ca" (OuterVolumeSpecName: "client-ca") pod "d5d50f9d-4285-4115-a7a1-14a8c29672af" (UID: "d5d50f9d-4285-4115-a7a1-14a8c29672af"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.950295 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d5d50f9d-4285-4115-a7a1-14a8c29672af" (UID: "d5d50f9d-4285-4115-a7a1-14a8c29672af"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.950449 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-config" (OuterVolumeSpecName: "config") pod "d5d50f9d-4285-4115-a7a1-14a8c29672af" (UID: "d5d50f9d-4285-4115-a7a1-14a8c29672af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.960951 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d50f9d-4285-4115-a7a1-14a8c29672af-kube-api-access-lqcgb" (OuterVolumeSpecName: "kube-api-access-lqcgb") pod "d5d50f9d-4285-4115-a7a1-14a8c29672af" (UID: "d5d50f9d-4285-4115-a7a1-14a8c29672af"). InnerVolumeSpecName "kube-api-access-lqcgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:44:47 crc kubenswrapper[5007]: I0218 19:44:47.962112 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d50f9d-4285-4115-a7a1-14a8c29672af-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d5d50f9d-4285-4115-a7a1-14a8c29672af" (UID: "d5d50f9d-4285-4115-a7a1-14a8c29672af"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.050098 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqcgb\" (UniqueName: \"kubernetes.io/projected/d5d50f9d-4285-4115-a7a1-14a8c29672af-kube-api-access-lqcgb\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.050146 5007 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.050157 5007 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d50f9d-4285-4115-a7a1-14a8c29672af-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.050167 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.050177 5007 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5d50f9d-4285-4115-a7a1-14a8c29672af-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.233116 5007 generic.go:334] "Generic (PLEG): container finished" podID="d5d50f9d-4285-4115-a7a1-14a8c29672af" containerID="c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c" exitCode=0 Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.233205 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.233221 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" event={"ID":"d5d50f9d-4285-4115-a7a1-14a8c29672af","Type":"ContainerDied","Data":"c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c"} Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.234149 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868cbf6d4b-blghn" event={"ID":"d5d50f9d-4285-4115-a7a1-14a8c29672af","Type":"ContainerDied","Data":"c5d74c0ca269fd1fbdacb4c0b9223c611597ba8c7386af6139445d32e9f8dba4"} Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.234188 5007 scope.go:117] "RemoveContainer" containerID="c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c" Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.268865 5007 scope.go:117] "RemoveContainer" containerID="c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c" Feb 18 19:44:48 crc kubenswrapper[5007]: E0218 19:44:48.269851 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c\": container with ID starting with c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c not found: ID does not exist" containerID="c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c" Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.269959 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c"} err="failed to get container status \"c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c\": rpc error: code = NotFound desc = could not find container 
\"c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c\": container with ID starting with c9e5ecff7ee5b0b7b83f2af159f43fb05914e0f1c82e403533593945480be30c not found: ID does not exist" Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.282952 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-868cbf6d4b-blghn"] Feb 18 19:44:48 crc kubenswrapper[5007]: I0218 19:44:48.290068 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-868cbf6d4b-blghn"] Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.137269 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8555cdd665-qhsxg"] Feb 18 19:44:49 crc kubenswrapper[5007]: E0218 19:44:49.138879 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d50f9d-4285-4115-a7a1-14a8c29672af" containerName="controller-manager" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.138916 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d50f9d-4285-4115-a7a1-14a8c29672af" containerName="controller-manager" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.139176 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5d50f9d-4285-4115-a7a1-14a8c29672af" containerName="controller-manager" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.140018 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.145243 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.145259 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.145326 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.145717 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.146088 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.149326 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.161005 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.166426 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8555cdd665-qhsxg"] Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.269254 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9716599-6556-4133-957b-b325e22d0547-config\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " 
pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.269532 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9716599-6556-4133-957b-b325e22d0547-client-ca\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.269683 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9716599-6556-4133-957b-b325e22d0547-serving-cert\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.269733 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pc95\" (UniqueName: \"kubernetes.io/projected/d9716599-6556-4133-957b-b325e22d0547-kube-api-access-7pc95\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.269812 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9716599-6556-4133-957b-b325e22d0547-proxy-ca-bundles\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.371687 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9716599-6556-4133-957b-b325e22d0547-proxy-ca-bundles\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.371954 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9716599-6556-4133-957b-b325e22d0547-config\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.372029 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9716599-6556-4133-957b-b325e22d0547-client-ca\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.372092 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9716599-6556-4133-957b-b325e22d0547-serving-cert\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.372155 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pc95\" (UniqueName: \"kubernetes.io/projected/d9716599-6556-4133-957b-b325e22d0547-kube-api-access-7pc95\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 
19:44:49.373677 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9716599-6556-4133-957b-b325e22d0547-client-ca\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.373707 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9716599-6556-4133-957b-b325e22d0547-config\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.374658 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9716599-6556-4133-957b-b325e22d0547-proxy-ca-bundles\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.390612 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9716599-6556-4133-957b-b325e22d0547-serving-cert\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.394887 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pc95\" (UniqueName: \"kubernetes.io/projected/d9716599-6556-4133-957b-b325e22d0547-kube-api-access-7pc95\") pod \"controller-manager-8555cdd665-qhsxg\" (UID: \"d9716599-6556-4133-957b-b325e22d0547\") " 
pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.465039 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.735472 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8555cdd665-qhsxg"] Feb 18 19:44:49 crc kubenswrapper[5007]: I0218 19:44:49.923460 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5d50f9d-4285-4115-a7a1-14a8c29672af" path="/var/lib/kubelet/pods/d5d50f9d-4285-4115-a7a1-14a8c29672af/volumes" Feb 18 19:44:50 crc kubenswrapper[5007]: I0218 19:44:50.249636 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" event={"ID":"d9716599-6556-4133-957b-b325e22d0547","Type":"ContainerStarted","Data":"752b3dc5e50c0e08ebec2077960fea3210e955baf2c1d39b32c87d1a40549b91"} Feb 18 19:44:50 crc kubenswrapper[5007]: I0218 19:44:50.249683 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" event={"ID":"d9716599-6556-4133-957b-b325e22d0547","Type":"ContainerStarted","Data":"a64cc376f7b5597910d1298740773fb47993862648d1b2b979f0c631b515d573"} Feb 18 19:44:50 crc kubenswrapper[5007]: I0218 19:44:50.249940 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:50 crc kubenswrapper[5007]: I0218 19:44:50.285395 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" Feb 18 19:44:50 crc kubenswrapper[5007]: I0218 19:44:50.291885 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-8555cdd665-qhsxg" podStartSLOduration=3.291835461 podStartE2EDuration="3.291835461s" podCreationTimestamp="2026-02-18 19:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:44:50.28926649 +0000 UTC m=+375.071386004" watchObservedRunningTime="2026-02-18 19:44:50.291835461 +0000 UTC m=+375.073954995" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.193642 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc"] Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.195946 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.198856 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.199381 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.213809 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc"] Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.259090 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww2vj\" (UniqueName: \"kubernetes.io/projected/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-kube-api-access-ww2vj\") pod \"collect-profiles-29524065-nkgnc\" (UID: \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.259164 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-config-volume\") pod \"collect-profiles-29524065-nkgnc\" (UID: \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.259327 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-secret-volume\") pod \"collect-profiles-29524065-nkgnc\" (UID: \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.360935 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-secret-volume\") pod \"collect-profiles-29524065-nkgnc\" (UID: \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.361021 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-config-volume\") pod \"collect-profiles-29524065-nkgnc\" (UID: \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.361045 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww2vj\" (UniqueName: \"kubernetes.io/projected/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-kube-api-access-ww2vj\") pod \"collect-profiles-29524065-nkgnc\" (UID: 
\"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.362073 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-config-volume\") pod \"collect-profiles-29524065-nkgnc\" (UID: \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.368380 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-secret-volume\") pod \"collect-profiles-29524065-nkgnc\" (UID: \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.379524 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww2vj\" (UniqueName: \"kubernetes.io/projected/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-kube-api-access-ww2vj\") pod \"collect-profiles-29524065-nkgnc\" (UID: \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.536715 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:00 crc kubenswrapper[5007]: I0218 19:45:00.994762 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc"] Feb 18 19:45:01 crc kubenswrapper[5007]: W0218 19:45:01.005922 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e0a8a78_95da_4a3f_8e8d_e4a79a3f7674.slice/crio-6af24ceb299426a8b0196745d4d8b812f7a3117e6a5d8efa0e96bed463bcde77 WatchSource:0}: Error finding container 6af24ceb299426a8b0196745d4d8b812f7a3117e6a5d8efa0e96bed463bcde77: Status 404 returned error can't find the container with id 6af24ceb299426a8b0196745d4d8b812f7a3117e6a5d8efa0e96bed463bcde77 Feb 18 19:45:01 crc kubenswrapper[5007]: I0218 19:45:01.318150 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" event={"ID":"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674","Type":"ContainerStarted","Data":"31ee5ec62d70cf7255ac7c4d7b0ee6da8856602f7ec997e9127bbcd3dd1bbab9"} Feb 18 19:45:01 crc kubenswrapper[5007]: I0218 19:45:01.318636 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" event={"ID":"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674","Type":"ContainerStarted","Data":"6af24ceb299426a8b0196745d4d8b812f7a3117e6a5d8efa0e96bed463bcde77"} Feb 18 19:45:01 crc kubenswrapper[5007]: I0218 19:45:01.343974 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" podStartSLOduration=1.3439492259999999 podStartE2EDuration="1.343949226s" podCreationTimestamp="2026-02-18 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 
19:45:01.340197703 +0000 UTC m=+386.122317277" watchObservedRunningTime="2026-02-18 19:45:01.343949226 +0000 UTC m=+386.126068750" Feb 18 19:45:02 crc kubenswrapper[5007]: I0218 19:45:02.319361 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" podUID="f895f35a-b144-444a-9329-3e7c3d0a1d9b" containerName="registry" containerID="cri-o://9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4" gracePeriod=30 Feb 18 19:45:02 crc kubenswrapper[5007]: I0218 19:45:02.330636 5007 generic.go:334] "Generic (PLEG): container finished" podID="5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674" containerID="31ee5ec62d70cf7255ac7c4d7b0ee6da8856602f7ec997e9127bbcd3dd1bbab9" exitCode=0 Feb 18 19:45:02 crc kubenswrapper[5007]: I0218 19:45:02.330720 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" event={"ID":"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674","Type":"ContainerDied","Data":"31ee5ec62d70cf7255ac7c4d7b0ee6da8856602f7ec997e9127bbcd3dd1bbab9"} Feb 18 19:45:02 crc kubenswrapper[5007]: I0218 19:45:02.885044 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.005901 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f895f35a-b144-444a-9329-3e7c3d0a1d9b-installation-pull-secrets\") pod \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.006022 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f895f35a-b144-444a-9329-3e7c3d0a1d9b-ca-trust-extracted\") pod \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.006290 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.006356 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f895f35a-b144-444a-9329-3e7c3d0a1d9b-registry-certificates\") pod \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.006383 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-bound-sa-token\") pod \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.006457 5007 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-registry-tls\") pod \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.006510 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f895f35a-b144-444a-9329-3e7c3d0a1d9b-trusted-ca\") pod \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.006549 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx94p\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-kube-api-access-qx94p\") pod \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\" (UID: \"f895f35a-b144-444a-9329-3e7c3d0a1d9b\") " Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.008592 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f895f35a-b144-444a-9329-3e7c3d0a1d9b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f895f35a-b144-444a-9329-3e7c3d0a1d9b" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.008689 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f895f35a-b144-444a-9329-3e7c3d0a1d9b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f895f35a-b144-444a-9329-3e7c3d0a1d9b" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.015258 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f895f35a-b144-444a-9329-3e7c3d0a1d9b" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.015854 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f895f35a-b144-444a-9329-3e7c3d0a1d9b" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.016212 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-kube-api-access-qx94p" (OuterVolumeSpecName: "kube-api-access-qx94p") pod "f895f35a-b144-444a-9329-3e7c3d0a1d9b" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b"). InnerVolumeSpecName "kube-api-access-qx94p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.016340 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f895f35a-b144-444a-9329-3e7c3d0a1d9b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f895f35a-b144-444a-9329-3e7c3d0a1d9b" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.022340 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f895f35a-b144-444a-9329-3e7c3d0a1d9b" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.025986 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f895f35a-b144-444a-9329-3e7c3d0a1d9b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f895f35a-b144-444a-9329-3e7c3d0a1d9b" (UID: "f895f35a-b144-444a-9329-3e7c3d0a1d9b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.108577 5007 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f895f35a-b144-444a-9329-3e7c3d0a1d9b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.109032 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx94p\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-kube-api-access-qx94p\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.109161 5007 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f895f35a-b144-444a-9329-3e7c3d0a1d9b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.109292 5007 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/f895f35a-b144-444a-9329-3e7c3d0a1d9b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.109563 5007 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f895f35a-b144-444a-9329-3e7c3d0a1d9b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.109736 5007 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.109852 5007 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f895f35a-b144-444a-9329-3e7c3d0a1d9b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.343651 5007 generic.go:334] "Generic (PLEG): container finished" podID="f895f35a-b144-444a-9329-3e7c3d0a1d9b" containerID="9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4" exitCode=0 Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.344005 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.345614 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" event={"ID":"f895f35a-b144-444a-9329-3e7c3d0a1d9b","Type":"ContainerDied","Data":"9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4"} Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.345708 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4vmwt" event={"ID":"f895f35a-b144-444a-9329-3e7c3d0a1d9b","Type":"ContainerDied","Data":"c70a0963ee283bf96e88a8f668482f11779d5673d0b9091533573fe5539e1e96"} Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.345741 5007 scope.go:117] "RemoveContainer" containerID="9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.370924 5007 scope.go:117] "RemoveContainer" containerID="9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4" Feb 18 19:45:03 crc kubenswrapper[5007]: E0218 19:45:03.371726 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4\": container with ID starting with 9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4 not found: ID does not exist" containerID="9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.372114 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4"} err="failed to get container status \"9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4\": rpc error: code = NotFound desc = could not find container 
\"9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4\": container with ID starting with 9e25c25d0845d7f7fe3c033d825b270de6f3b4f596a75ba0bb580f71a1a886a4 not found: ID does not exist" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.415781 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4vmwt"] Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.426375 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4vmwt"] Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.763368 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.818745 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-config-volume\") pod \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\" (UID: \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\") " Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.818902 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww2vj\" (UniqueName: \"kubernetes.io/projected/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-kube-api-access-ww2vj\") pod \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\" (UID: \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\") " Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.819024 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-secret-volume\") pod \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\" (UID: \"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674\") " Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.820141 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-config-volume" (OuterVolumeSpecName: "config-volume") pod "5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674" (UID: "5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.825566 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674" (UID: "5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.830688 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-kube-api-access-ww2vj" (OuterVolumeSpecName: "kube-api-access-ww2vj") pod "5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674" (UID: "5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674"). InnerVolumeSpecName "kube-api-access-ww2vj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.920642 5007 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.920689 5007 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.920703 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww2vj\" (UniqueName: \"kubernetes.io/projected/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674-kube-api-access-ww2vj\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[5007]: I0218 19:45:03.922194 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f895f35a-b144-444a-9329-3e7c3d0a1d9b" path="/var/lib/kubelet/pods/f895f35a-b144-444a-9329-3e7c3d0a1d9b/volumes" Feb 18 19:45:04 crc kubenswrapper[5007]: I0218 19:45:04.358654 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" event={"ID":"5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674","Type":"ContainerDied","Data":"6af24ceb299426a8b0196745d4d8b812f7a3117e6a5d8efa0e96bed463bcde77"} Feb 18 19:45:04 crc kubenswrapper[5007]: I0218 19:45:04.360503 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6af24ceb299426a8b0196745d4d8b812f7a3117e6a5d8efa0e96bed463bcde77" Feb 18 19:45:04 crc kubenswrapper[5007]: I0218 19:45:04.358754 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc" Feb 18 19:45:15 crc kubenswrapper[5007]: I0218 19:45:15.664311 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:45:15 crc kubenswrapper[5007]: I0218 19:45:15.665569 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:45:15 crc kubenswrapper[5007]: I0218 19:45:15.665659 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:45:15 crc kubenswrapper[5007]: I0218 19:45:15.666736 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21a62a8c05493231db44d9e0f17c6cd72f955097074b39625fb84dd10a0c498d"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:45:15 crc kubenswrapper[5007]: I0218 19:45:15.666891 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://21a62a8c05493231db44d9e0f17c6cd72f955097074b39625fb84dd10a0c498d" gracePeriod=600 Feb 18 19:45:16 crc kubenswrapper[5007]: I0218 19:45:16.456971 5007 generic.go:334] "Generic (PLEG): container 
finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="21a62a8c05493231db44d9e0f17c6cd72f955097074b39625fb84dd10a0c498d" exitCode=0 Feb 18 19:45:16 crc kubenswrapper[5007]: I0218 19:45:16.457095 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"21a62a8c05493231db44d9e0f17c6cd72f955097074b39625fb84dd10a0c498d"} Feb 18 19:45:16 crc kubenswrapper[5007]: I0218 19:45:16.457420 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"a165cc76c2e5187ea1399af2eea9deb083731acfe4acb3767cb0941dfb479641"} Feb 18 19:45:16 crc kubenswrapper[5007]: I0218 19:45:16.457483 5007 scope.go:117] "RemoveContainer" containerID="cbd87b284401ca4fbdf028fce657ea857bb83d6b5bd3ffee01e42fead57ee3ec" Feb 18 19:47:36 crc kubenswrapper[5007]: I0218 19:47:36.145576 5007 scope.go:117] "RemoveContainer" containerID="c8727577e45ba9fd5c35895e188d42d023434d8f710ad89ca4d2471bc1521aac" Feb 18 19:47:36 crc kubenswrapper[5007]: I0218 19:47:36.185365 5007 scope.go:117] "RemoveContainer" containerID="677a69f46dfe9e8674ae2fc79854a34b355cb4a38962ceb2aea7c49460d3a951" Feb 18 19:47:45 crc kubenswrapper[5007]: I0218 19:47:45.663688 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:47:45 crc kubenswrapper[5007]: I0218 19:47:45.664749 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:48:15 crc kubenswrapper[5007]: I0218 19:48:15.663985 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:48:15 crc kubenswrapper[5007]: I0218 19:48:15.664831 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.474494 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-s2cxf"] Feb 18 19:48:34 crc kubenswrapper[5007]: E0218 19:48:34.475382 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674" containerName="collect-profiles" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.475402 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674" containerName="collect-profiles" Feb 18 19:48:34 crc kubenswrapper[5007]: E0218 19:48:34.475452 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f895f35a-b144-444a-9329-3e7c3d0a1d9b" containerName="registry" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.475462 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f895f35a-b144-444a-9329-3e7c3d0a1d9b" containerName="registry" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.475596 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674" 
containerName="collect-profiles" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.475615 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f895f35a-b144-444a-9329-3e7c3d0a1d9b" containerName="registry" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.476175 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s2cxf" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.478457 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.478857 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.479082 5007 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-fxlgr" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.503147 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-r9hfd"] Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.504630 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-r9hfd" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.509874 5007 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-s7pnm" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.522140 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-r9hfd"] Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.526239 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-cxhmz"] Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.527165 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-cxhmz" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.529181 5007 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dh79v" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.533482 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-cxhmz"] Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.536050 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-s2cxf"] Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.646407 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5qtx\" (UniqueName: \"kubernetes.io/projected/c2466a67-b33b-45d9-90f4-269c3102389f-kube-api-access-q5qtx\") pod \"cert-manager-webhook-687f57d79b-cxhmz\" (UID: \"c2466a67-b33b-45d9-90f4-269c3102389f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-cxhmz" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.646473 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp9v8\" (UniqueName: \"kubernetes.io/projected/ffa99c93-57c4-4499-9d94-2ca2c77ac286-kube-api-access-rp9v8\") pod \"cert-manager-cainjector-cf98fcc89-s2cxf\" (UID: \"ffa99c93-57c4-4499-9d94-2ca2c77ac286\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-s2cxf" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.646521 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8z6j\" (UniqueName: \"kubernetes.io/projected/83a1bb41-d8d4-41d5-bcde-0aef8cf503a4-kube-api-access-g8z6j\") pod \"cert-manager-858654f9db-r9hfd\" (UID: \"83a1bb41-d8d4-41d5-bcde-0aef8cf503a4\") " pod="cert-manager/cert-manager-858654f9db-r9hfd" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.748646 
5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8z6j\" (UniqueName: \"kubernetes.io/projected/83a1bb41-d8d4-41d5-bcde-0aef8cf503a4-kube-api-access-g8z6j\") pod \"cert-manager-858654f9db-r9hfd\" (UID: \"83a1bb41-d8d4-41d5-bcde-0aef8cf503a4\") " pod="cert-manager/cert-manager-858654f9db-r9hfd" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.748821 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5qtx\" (UniqueName: \"kubernetes.io/projected/c2466a67-b33b-45d9-90f4-269c3102389f-kube-api-access-q5qtx\") pod \"cert-manager-webhook-687f57d79b-cxhmz\" (UID: \"c2466a67-b33b-45d9-90f4-269c3102389f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-cxhmz" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.748862 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp9v8\" (UniqueName: \"kubernetes.io/projected/ffa99c93-57c4-4499-9d94-2ca2c77ac286-kube-api-access-rp9v8\") pod \"cert-manager-cainjector-cf98fcc89-s2cxf\" (UID: \"ffa99c93-57c4-4499-9d94-2ca2c77ac286\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-s2cxf" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.779630 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp9v8\" (UniqueName: \"kubernetes.io/projected/ffa99c93-57c4-4499-9d94-2ca2c77ac286-kube-api-access-rp9v8\") pod \"cert-manager-cainjector-cf98fcc89-s2cxf\" (UID: \"ffa99c93-57c4-4499-9d94-2ca2c77ac286\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-s2cxf" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.781627 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5qtx\" (UniqueName: \"kubernetes.io/projected/c2466a67-b33b-45d9-90f4-269c3102389f-kube-api-access-q5qtx\") pod \"cert-manager-webhook-687f57d79b-cxhmz\" (UID: \"c2466a67-b33b-45d9-90f4-269c3102389f\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-cxhmz" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.793907 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8z6j\" (UniqueName: \"kubernetes.io/projected/83a1bb41-d8d4-41d5-bcde-0aef8cf503a4-kube-api-access-g8z6j\") pod \"cert-manager-858654f9db-r9hfd\" (UID: \"83a1bb41-d8d4-41d5-bcde-0aef8cf503a4\") " pod="cert-manager/cert-manager-858654f9db-r9hfd" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.798014 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s2cxf" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.821985 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-r9hfd" Feb 18 19:48:34 crc kubenswrapper[5007]: I0218 19:48:34.842760 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-cxhmz" Feb 18 19:48:35 crc kubenswrapper[5007]: I0218 19:48:35.058932 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-r9hfd"] Feb 18 19:48:35 crc kubenswrapper[5007]: I0218 19:48:35.073088 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:48:35 crc kubenswrapper[5007]: I0218 19:48:35.295086 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-r9hfd" event={"ID":"83a1bb41-d8d4-41d5-bcde-0aef8cf503a4","Type":"ContainerStarted","Data":"907cf88e385a7916fdf6ebce3700754c7e592de73b2684356f5328a0123df306"} Feb 18 19:48:35 crc kubenswrapper[5007]: I0218 19:48:35.351559 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-cxhmz"] Feb 18 19:48:35 crc kubenswrapper[5007]: W0218 19:48:35.369775 5007 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2466a67_b33b_45d9_90f4_269c3102389f.slice/crio-4c9b4b5c0fc7eee44de9344d8720722e627bfa2b04e9e8390161e49e6d1da589 WatchSource:0}: Error finding container 4c9b4b5c0fc7eee44de9344d8720722e627bfa2b04e9e8390161e49e6d1da589: Status 404 returned error can't find the container with id 4c9b4b5c0fc7eee44de9344d8720722e627bfa2b04e9e8390161e49e6d1da589 Feb 18 19:48:35 crc kubenswrapper[5007]: I0218 19:48:35.388039 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-s2cxf"] Feb 18 19:48:35 crc kubenswrapper[5007]: W0218 19:48:35.391285 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffa99c93_57c4_4499_9d94_2ca2c77ac286.slice/crio-59844cb0085b807f5a7309de506fd78e6b1b2529abfc18e49a452b410ad25230 WatchSource:0}: Error finding container 59844cb0085b807f5a7309de506fd78e6b1b2529abfc18e49a452b410ad25230: Status 404 returned error can't find the container with id 59844cb0085b807f5a7309de506fd78e6b1b2529abfc18e49a452b410ad25230 Feb 18 19:48:36 crc kubenswrapper[5007]: I0218 19:48:36.252277 5007 scope.go:117] "RemoveContainer" containerID="14680f08844fcd8db202e94277fd6643744b899bb6d7ac2a6ed23b724918d216" Feb 18 19:48:36 crc kubenswrapper[5007]: I0218 19:48:36.302812 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s2cxf" event={"ID":"ffa99c93-57c4-4499-9d94-2ca2c77ac286","Type":"ContainerStarted","Data":"59844cb0085b807f5a7309de506fd78e6b1b2529abfc18e49a452b410ad25230"} Feb 18 19:48:36 crc kubenswrapper[5007]: I0218 19:48:36.304117 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-cxhmz" event={"ID":"c2466a67-b33b-45d9-90f4-269c3102389f","Type":"ContainerStarted","Data":"4c9b4b5c0fc7eee44de9344d8720722e627bfa2b04e9e8390161e49e6d1da589"} Feb 18 19:48:36 crc 
kubenswrapper[5007]: I0218 19:48:36.357919 5007 scope.go:117] "RemoveContainer" containerID="b52277c81287a57c5c81b0e9418f4e8ce57cbc1e817ed890395352b261bbe494" Feb 18 19:48:36 crc kubenswrapper[5007]: I0218 19:48:36.515378 5007 scope.go:117] "RemoveContainer" containerID="475e8d9d81ec86ccd8fa63a550a3f4174996acc26ecd64e7fc970eeed48a9fe7" Feb 18 19:48:36 crc kubenswrapper[5007]: I0218 19:48:36.649457 5007 scope.go:117] "RemoveContainer" containerID="3455a98e40d89562e40e6e94ba15d161b0158d0cd6a9fbe929a8cc5a96dcdd33" Feb 18 19:48:38 crc kubenswrapper[5007]: I0218 19:48:38.324709 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-r9hfd" event={"ID":"83a1bb41-d8d4-41d5-bcde-0aef8cf503a4","Type":"ContainerStarted","Data":"fc0991e631347b49df95441364ac65dffc99e5e76633f57052505e53b73a5739"} Feb 18 19:48:38 crc kubenswrapper[5007]: I0218 19:48:38.345243 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-r9hfd" podStartSLOduration=1.630892228 podStartE2EDuration="4.34521927s" podCreationTimestamp="2026-02-18 19:48:34 +0000 UTC" firstStartedPulling="2026-02-18 19:48:35.072807757 +0000 UTC m=+599.854927291" lastFinishedPulling="2026-02-18 19:48:37.787134809 +0000 UTC m=+602.569254333" observedRunningTime="2026-02-18 19:48:38.343273723 +0000 UTC m=+603.125393267" watchObservedRunningTime="2026-02-18 19:48:38.34521927 +0000 UTC m=+603.127338794" Feb 18 19:48:39 crc kubenswrapper[5007]: I0218 19:48:39.346290 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s2cxf" event={"ID":"ffa99c93-57c4-4499-9d94-2ca2c77ac286","Type":"ContainerStarted","Data":"c0e396bc88c4005558e5d036ca47c7ee5296be36024d95995f65474dac72f637"} Feb 18 19:48:39 crc kubenswrapper[5007]: I0218 19:48:39.350063 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-cxhmz" 
event={"ID":"c2466a67-b33b-45d9-90f4-269c3102389f","Type":"ContainerStarted","Data":"84f357ffc00de7d8c4e3bda6749d4c0217487893a87e4bf79e69e2ff43f78f16"} Feb 18 19:48:39 crc kubenswrapper[5007]: I0218 19:48:39.350114 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-cxhmz" Feb 18 19:48:39 crc kubenswrapper[5007]: I0218 19:48:39.369348 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s2cxf" podStartSLOduration=1.732695611 podStartE2EDuration="5.369315113s" podCreationTimestamp="2026-02-18 19:48:34 +0000 UTC" firstStartedPulling="2026-02-18 19:48:35.393854224 +0000 UTC m=+600.175973768" lastFinishedPulling="2026-02-18 19:48:39.030473736 +0000 UTC m=+603.812593270" observedRunningTime="2026-02-18 19:48:39.367201671 +0000 UTC m=+604.149321225" watchObservedRunningTime="2026-02-18 19:48:39.369315113 +0000 UTC m=+604.151434647" Feb 18 19:48:39 crc kubenswrapper[5007]: I0218 19:48:39.406085 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-cxhmz" podStartSLOduration=1.691711425 podStartE2EDuration="5.406051896s" podCreationTimestamp="2026-02-18 19:48:34 +0000 UTC" firstStartedPulling="2026-02-18 19:48:35.372907323 +0000 UTC m=+600.155026847" lastFinishedPulling="2026-02-18 19:48:39.087247794 +0000 UTC m=+603.869367318" observedRunningTime="2026-02-18 19:48:39.403221684 +0000 UTC m=+604.185341218" watchObservedRunningTime="2026-02-18 19:48:39.406051896 +0000 UTC m=+604.188171420" Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.577811 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-75flr"] Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.578635 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" 
podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovn-controller" containerID="cri-o://baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201" gracePeriod=30 Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.578782 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="northd" containerID="cri-o://7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78" gracePeriod=30 Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.578911 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="kube-rbac-proxy-node" containerID="cri-o://2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07" gracePeriod=30 Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.578947 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovn-acl-logging" containerID="cri-o://67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105" gracePeriod=30 Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.579136 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="sbdb" containerID="cri-o://c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f" gracePeriod=30 Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.579227 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="nbdb" containerID="cri-o://d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746" gracePeriod=30 Feb 18 
19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.578887 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91" gracePeriod=30 Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.635003 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" containerID="cri-o://510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed" gracePeriod=30 Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.846487 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-cxhmz" Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.960220 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/3.log" Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.962898 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovn-acl-logging/0.log" Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.963348 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovn-controller/0.log" Feb 18 19:48:44 crc kubenswrapper[5007]: I0218 19:48:44.963855 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.003496 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-ovn\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.003596 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.003687 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-var-lib-cni-networks-ovn-kubernetes\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.003712 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-cni-bin\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.003903 5007 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.003937 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.004078 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.014855 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ktgrw"] Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.015300 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="nbdb" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.015361 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="nbdb" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.015411 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovn-acl-logging" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.015507 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovn-acl-logging" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.015578 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc 
kubenswrapper[5007]: I0218 19:48:45.015643 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.015716 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="northd" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.015783 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="northd" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.015854 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="sbdb" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.015919 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="sbdb" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.015984 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.016030 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.016082 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.016133 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.016186 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="kube-rbac-proxy-node" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 
19:48:45.016249 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="kube-rbac-proxy-node" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.016311 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovn-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.016364 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovn-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.016460 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.016525 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.016583 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.016631 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.016679 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="kubecfg-setup" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.016731 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="kubecfg-setup" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.016874 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 19:48:45 crc 
kubenswrapper[5007]: I0218 19:48:45.016935 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.016983 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.017029 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="kube-rbac-proxy-node" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.017081 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovn-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.017135 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.017190 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovn-acl-logging" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.017238 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="northd" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.017293 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="sbdb" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.017342 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="nbdb" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.017508 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" 
Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.017580 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.017736 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.017990 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3b9770-e878-4070-978e-29931dd72c35" containerName="ovnkube-controller" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.019543 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.105084 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-systemd-units\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.105236 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.105331 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-etc-openvswitch\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.105469 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.105650 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de3b9770-e878-4070-978e-29931dd72c35-ovn-node-metrics-cert\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.106586 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-script-lib\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.107239 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89t88\" (UniqueName: \"kubernetes.io/projected/de3b9770-e878-4070-978e-29931dd72c35-kube-api-access-89t88\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 
19:48:45.107323 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-log-socket\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.107390 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-slash\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.107484 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-env-overrides\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.107536 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-systemd\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.107566 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.107730 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-log-socket" (OuterVolumeSpecName: "log-socket") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.107753 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-slash" (OuterVolumeSpecName: "host-slash") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.107643 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.107583 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-run-ovn-kubernetes\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.107947 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-openvswitch\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108015 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-config\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108073 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-run-netns\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108124 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-node-log\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108172 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-kubelet\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108243 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-cni-netd\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108287 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-var-lib-openvswitch\") pod \"de3b9770-e878-4070-978e-29931dd72c35\" (UID: \"de3b9770-e878-4070-978e-29931dd72c35\") " Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108484 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-run-ovn\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108557 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-run-openvswitch\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108070 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "de3b9770-e878-4070-978e-29931dd72c35" 
(UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108640 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108653 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108643 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b066082-e74c-4b33-ab9a-afb9f01aa170-ovn-node-metrics-cert\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108561 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108730 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108717 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108781 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108755 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-run-ovn-kubernetes\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108736 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-node-log" (OuterVolumeSpecName: "node-log") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.108909 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-cni-bin\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.109186 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-node-log\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.109397 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.109512 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b066082-e74c-4b33-ab9a-afb9f01aa170-ovnkube-config\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.109607 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b066082-e74c-4b33-ab9a-afb9f01aa170-env-overrides\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.109793 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-log-socket\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.109910 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-slash\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.109963 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-run-systemd\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110005 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b066082-e74c-4b33-ab9a-afb9f01aa170-ovnkube-script-lib\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110059 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-kubelet\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110116 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-run-netns\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110171 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-var-lib-openvswitch\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110218 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-systemd-units\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110276 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-cni-netd\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110339 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-etc-openvswitch\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110402 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lmhf\" (UniqueName: \"kubernetes.io/projected/9b066082-e74c-4b33-ab9a-afb9f01aa170-kube-api-access-9lmhf\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110540 5007 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110660 5007 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110724 5007 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110755 5007 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110780 5007 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110804 5007 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110829 5007 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110856 5007 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110878 5007 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de3b9770-e878-4070-978e-29931dd72c35-ovnkube-config\") on node \"crc\" DevicePath 
\"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110901 5007 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110927 5007 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110949 5007 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.110976 5007 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.111001 5007 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.111018 5007 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.111037 5007 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.114462 5007 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de3b9770-e878-4070-978e-29931dd72c35-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.116687 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de3b9770-e878-4070-978e-29931dd72c35-kube-api-access-89t88" (OuterVolumeSpecName: "kube-api-access-89t88") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "kube-api-access-89t88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.122106 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "de3b9770-e878-4070-978e-29931dd72c35" (UID: "de3b9770-e878-4070-978e-29931dd72c35"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.212709 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-run-netns\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.212811 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-var-lib-openvswitch\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.212862 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-systemd-units\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.212909 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-cni-netd\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.212951 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-etc-openvswitch\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 
crc kubenswrapper[5007]: I0218 19:48:45.212952 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-run-netns\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.213022 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-var-lib-openvswitch\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.213091 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-cni-netd\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.212996 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lmhf\" (UniqueName: \"kubernetes.io/projected/9b066082-e74c-4b33-ab9a-afb9f01aa170-kube-api-access-9lmhf\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.213252 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-run-ovn\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.213028 5007 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-systemd-units\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.213309 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-run-openvswitch\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.213334 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-run-ovn\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.213117 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-etc-openvswitch\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.213414 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-run-openvswitch\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.213519 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-run-ovn-kubernetes\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.213779 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-cni-bin\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.213576 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-run-ovn-kubernetes\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.213939 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b066082-e74c-4b33-ab9a-afb9f01aa170-ovn-node-metrics-cert\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.214154 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-node-log\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.214021 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-cni-bin\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.214342 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-node-log\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.214521 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.214578 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.214650 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b066082-e74c-4b33-ab9a-afb9f01aa170-ovnkube-config\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.215542 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/9b066082-e74c-4b33-ab9a-afb9f01aa170-env-overrides\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.215779 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-log-socket\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.215926 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-slash\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.216104 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-run-systemd\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.216265 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b066082-e74c-4b33-ab9a-afb9f01aa170-ovnkube-config\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.216264 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b066082-e74c-4b33-ab9a-afb9f01aa170-ovnkube-script-lib\") pod 
\"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.216374 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-kubelet\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.216494 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-slash\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.216516 5007 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de3b9770-e878-4070-978e-29931dd72c35-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.216545 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89t88\" (UniqueName: \"kubernetes.io/projected/de3b9770-e878-4070-978e-29931dd72c35-kube-api-access-89t88\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.216572 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-run-systemd\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.216550 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-log-socket\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.216611 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b066082-e74c-4b33-ab9a-afb9f01aa170-host-kubelet\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.216575 5007 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/de3b9770-e878-4070-978e-29931dd72c35-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.216572 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b066082-e74c-4b33-ab9a-afb9f01aa170-env-overrides\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.218530 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b066082-e74c-4b33-ab9a-afb9f01aa170-ovnkube-script-lib\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.219266 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b066082-e74c-4b33-ab9a-afb9f01aa170-ovn-node-metrics-cert\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 
crc kubenswrapper[5007]: I0218 19:48:45.237897 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lmhf\" (UniqueName: \"kubernetes.io/projected/9b066082-e74c-4b33-ab9a-afb9f01aa170-kube-api-access-9lmhf\") pod \"ovnkube-node-ktgrw\" (UID: \"9b066082-e74c-4b33-ab9a-afb9f01aa170\") " pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.334117 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.398245 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovnkube-controller/3.log" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.402200 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovn-acl-logging/0.log" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.402997 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-75flr_de3b9770-e878-4070-978e-29931dd72c35/ovn-controller/0.log" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.404498 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" containerID="510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed" exitCode=0 Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.404563 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" containerID="c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f" exitCode=0 Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.404588 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" 
containerID="d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746" exitCode=0 Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.404610 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" containerID="7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78" exitCode=0 Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.404628 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" containerID="deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91" exitCode=0 Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.404645 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" containerID="2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07" exitCode=0 Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.404634 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.404699 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.404732 5007 scope.go:117] "RemoveContainer" containerID="510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.404662 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" containerID="67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105" exitCode=143 Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.404785 5007 generic.go:334] "Generic (PLEG): container finished" podID="de3b9770-e878-4070-978e-29931dd72c35" containerID="baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201" exitCode=143 Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.404710 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405004 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405038 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405069 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" 
event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405096 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405122 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405148 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405164 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405178 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405192 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405205 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07"} Feb 18 19:48:45 crc 
kubenswrapper[5007]: I0218 19:48:45.405219 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405232 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405247 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405270 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405290 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405306 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405320 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405334 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405350 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405364 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405379 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405392 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405407 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405421 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405470 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405494 5007 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405510 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405524 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405537 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405551 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405565 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405579 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405595 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405610 5007 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405624 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405646 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75flr" event={"ID":"de3b9770-e878-4070-978e-29931dd72c35","Type":"ContainerDied","Data":"c3b7729002606e0003095c4dea7a88371f7bd7086109b994f79727a533762c1d"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405669 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405685 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.405700 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.410594 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kjbbj_9c2fc812-cdd8-415b-a138-b8a3e14fd396/kube-multus/2.log" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.410649 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746"} Feb 18 19:48:45 crc kubenswrapper[5007]: 
I0218 19:48:45.410688 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.410704 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.410722 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.410735 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.410749 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.410763 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.412484 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kjbbj_9c2fc812-cdd8-415b-a138-b8a3e14fd396/kube-multus/1.log" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.412553 5007 generic.go:334] "Generic (PLEG): container finished" podID="9c2fc812-cdd8-415b-a138-b8a3e14fd396" containerID="e7957d56fb4dadc65bfa2816698c583bf7f0be11331ec1ec2793b1354173aec0" exitCode=2 Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.412642 5007 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kjbbj" event={"ID":"9c2fc812-cdd8-415b-a138-b8a3e14fd396","Type":"ContainerDied","Data":"e7957d56fb4dadc65bfa2816698c583bf7f0be11331ec1ec2793b1354173aec0"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.412671 5007 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b55b8cd27548291a3d8d544065d6d311b2c0c87267c7dcec461b243a28ea14d9"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.413508 5007 scope.go:117] "RemoveContainer" containerID="e7957d56fb4dadc65bfa2816698c583bf7f0be11331ec1ec2793b1354173aec0" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.413910 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kjbbj_openshift-multus(9c2fc812-cdd8-415b-a138-b8a3e14fd396)\"" pod="openshift-multus/multus-kjbbj" podUID="9c2fc812-cdd8-415b-a138-b8a3e14fd396" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.415085 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" event={"ID":"9b066082-e74c-4b33-ab9a-afb9f01aa170","Type":"ContainerStarted","Data":"2416170831ebd6057592db48a8fe33d2cde2db3f2eb6eee4bf6b819b58c064c7"} Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.436074 5007 scope.go:117] "RemoveContainer" containerID="ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.468878 5007 scope.go:117] "RemoveContainer" containerID="c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.489133 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-75flr"] Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.493764 5007 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-75flr"] Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.500747 5007 scope.go:117] "RemoveContainer" containerID="d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.534804 5007 scope.go:117] "RemoveContainer" containerID="7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.557528 5007 scope.go:117] "RemoveContainer" containerID="deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.576644 5007 scope.go:117] "RemoveContainer" containerID="2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.617793 5007 scope.go:117] "RemoveContainer" containerID="67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.640618 5007 scope.go:117] "RemoveContainer" containerID="baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.663758 5007 scope.go:117] "RemoveContainer" containerID="8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.663992 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.664068 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.664132 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.664862 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a165cc76c2e5187ea1399af2eea9deb083731acfe4acb3767cb0941dfb479641"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.664959 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://a165cc76c2e5187ea1399af2eea9deb083731acfe4acb3767cb0941dfb479641" gracePeriod=600 Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.690410 5007 scope.go:117] "RemoveContainer" containerID="510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.690964 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed\": container with ID starting with 510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed not found: ID does not exist" containerID="510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.691019 5007 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed"} err="failed to get container status \"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed\": rpc error: code = NotFound desc = could not find container \"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed\": container with ID starting with 510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.691060 5007 scope.go:117] "RemoveContainer" containerID="ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.691453 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\": container with ID starting with ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec not found: ID does not exist" containerID="ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.691497 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec"} err="failed to get container status \"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\": rpc error: code = NotFound desc = could not find container \"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\": container with ID starting with ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.691518 5007 scope.go:117] "RemoveContainer" containerID="c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.691905 5007 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\": container with ID starting with c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f not found: ID does not exist" containerID="c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.692028 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f"} err="failed to get container status \"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\": rpc error: code = NotFound desc = could not find container \"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\": container with ID starting with c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.692119 5007 scope.go:117] "RemoveContainer" containerID="d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.692654 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\": container with ID starting with d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746 not found: ID does not exist" containerID="d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.692704 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746"} err="failed to get container status \"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\": rpc error: code = NotFound desc = could not find container 
\"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\": container with ID starting with d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.692738 5007 scope.go:117] "RemoveContainer" containerID="7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.693240 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\": container with ID starting with 7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78 not found: ID does not exist" containerID="7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.693329 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78"} err="failed to get container status \"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\": rpc error: code = NotFound desc = could not find container \"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\": container with ID starting with 7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.693358 5007 scope.go:117] "RemoveContainer" containerID="deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.693877 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\": container with ID starting with deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91 not found: ID does not exist" 
containerID="deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.693912 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91"} err="failed to get container status \"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\": rpc error: code = NotFound desc = could not find container \"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\": container with ID starting with deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.693932 5007 scope.go:117] "RemoveContainer" containerID="2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.694278 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\": container with ID starting with 2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07 not found: ID does not exist" containerID="2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.694306 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07"} err="failed to get container status \"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\": rpc error: code = NotFound desc = could not find container \"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\": container with ID starting with 2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.694323 5007 scope.go:117] 
"RemoveContainer" containerID="67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.694664 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\": container with ID starting with 67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105 not found: ID does not exist" containerID="67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.694694 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105"} err="failed to get container status \"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\": rpc error: code = NotFound desc = could not find container \"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\": container with ID starting with 67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.694714 5007 scope.go:117] "RemoveContainer" containerID="baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.695004 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\": container with ID starting with baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201 not found: ID does not exist" containerID="baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.695036 5007 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201"} err="failed to get container status \"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\": rpc error: code = NotFound desc = could not find container \"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\": container with ID starting with baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.695055 5007 scope.go:117] "RemoveContainer" containerID="8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065" Feb 18 19:48:45 crc kubenswrapper[5007]: E0218 19:48:45.695339 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\": container with ID starting with 8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065 not found: ID does not exist" containerID="8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.695364 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065"} err="failed to get container status \"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\": rpc error: code = NotFound desc = could not find container \"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\": container with ID starting with 8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.695380 5007 scope.go:117] "RemoveContainer" containerID="510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.695714 5007 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed"} err="failed to get container status \"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed\": rpc error: code = NotFound desc = could not find container \"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed\": container with ID starting with 510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.695749 5007 scope.go:117] "RemoveContainer" containerID="ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.696069 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec"} err="failed to get container status \"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\": rpc error: code = NotFound desc = could not find container \"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\": container with ID starting with ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.696100 5007 scope.go:117] "RemoveContainer" containerID="c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.696771 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f"} err="failed to get container status \"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\": rpc error: code = NotFound desc = could not find container \"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\": container with ID starting with c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f not 
found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.696815 5007 scope.go:117] "RemoveContainer" containerID="d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.697495 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746"} err="failed to get container status \"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\": rpc error: code = NotFound desc = could not find container \"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\": container with ID starting with d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.697532 5007 scope.go:117] "RemoveContainer" containerID="7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.698126 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78"} err="failed to get container status \"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\": rpc error: code = NotFound desc = could not find container \"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\": container with ID starting with 7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.698155 5007 scope.go:117] "RemoveContainer" containerID="deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.698422 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91"} err="failed to get 
container status \"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\": rpc error: code = NotFound desc = could not find container \"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\": container with ID starting with deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.698474 5007 scope.go:117] "RemoveContainer" containerID="2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.698814 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07"} err="failed to get container status \"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\": rpc error: code = NotFound desc = could not find container \"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\": container with ID starting with 2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.698838 5007 scope.go:117] "RemoveContainer" containerID="67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.699142 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105"} err="failed to get container status \"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\": rpc error: code = NotFound desc = could not find container \"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\": container with ID starting with 67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.699185 5007 scope.go:117] "RemoveContainer" 
containerID="baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.699606 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201"} err="failed to get container status \"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\": rpc error: code = NotFound desc = could not find container \"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\": container with ID starting with baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.699668 5007 scope.go:117] "RemoveContainer" containerID="8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.700061 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065"} err="failed to get container status \"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\": rpc error: code = NotFound desc = could not find container \"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\": container with ID starting with 8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.700090 5007 scope.go:117] "RemoveContainer" containerID="510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.700447 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed"} err="failed to get container status \"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed\": rpc error: code = NotFound desc = could 
not find container \"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed\": container with ID starting with 510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.700477 5007 scope.go:117] "RemoveContainer" containerID="ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.700779 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec"} err="failed to get container status \"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\": rpc error: code = NotFound desc = could not find container \"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\": container with ID starting with ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.700806 5007 scope.go:117] "RemoveContainer" containerID="c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.701108 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f"} err="failed to get container status \"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\": rpc error: code = NotFound desc = could not find container \"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\": container with ID starting with c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.701137 5007 scope.go:117] "RemoveContainer" containerID="d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 
19:48:45.701425 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746"} err="failed to get container status \"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\": rpc error: code = NotFound desc = could not find container \"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\": container with ID starting with d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.701461 5007 scope.go:117] "RemoveContainer" containerID="7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.701729 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78"} err="failed to get container status \"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\": rpc error: code = NotFound desc = could not find container \"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\": container with ID starting with 7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.701764 5007 scope.go:117] "RemoveContainer" containerID="deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.702097 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91"} err="failed to get container status \"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\": rpc error: code = NotFound desc = could not find container \"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\": container with ID starting with 
deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.702130 5007 scope.go:117] "RemoveContainer" containerID="2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.702469 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07"} err="failed to get container status \"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\": rpc error: code = NotFound desc = could not find container \"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\": container with ID starting with 2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.702491 5007 scope.go:117] "RemoveContainer" containerID="67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.702831 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105"} err="failed to get container status \"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\": rpc error: code = NotFound desc = could not find container \"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\": container with ID starting with 67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.702861 5007 scope.go:117] "RemoveContainer" containerID="baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.703168 5007 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201"} err="failed to get container status \"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\": rpc error: code = NotFound desc = could not find container \"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\": container with ID starting with baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.703206 5007 scope.go:117] "RemoveContainer" containerID="8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.703510 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065"} err="failed to get container status \"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\": rpc error: code = NotFound desc = could not find container \"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\": container with ID starting with 8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.703537 5007 scope.go:117] "RemoveContainer" containerID="510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.703846 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed"} err="failed to get container status \"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed\": rpc error: code = NotFound desc = could not find container \"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed\": container with ID starting with 510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed not found: ID does not 
exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.703885 5007 scope.go:117] "RemoveContainer" containerID="ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.704221 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec"} err="failed to get container status \"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\": rpc error: code = NotFound desc = could not find container \"ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec\": container with ID starting with ff8ce2b2686619f44f758f7abd19e96509deee0655b827be6ecb8af4c3b9e0ec not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.704250 5007 scope.go:117] "RemoveContainer" containerID="c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.704578 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f"} err="failed to get container status \"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\": rpc error: code = NotFound desc = could not find container \"c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f\": container with ID starting with c11c9c8761a97ab7a78e5933a2c4baf6c84a511f9c375f4d909382610a1b966f not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.704714 5007 scope.go:117] "RemoveContainer" containerID="d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.705074 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746"} err="failed to get container status 
\"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\": rpc error: code = NotFound desc = could not find container \"d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746\": container with ID starting with d032f2d559b1a5344277bed9c0030e59c7b96cb72b9b1b824be07d26f1b87746 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.705214 5007 scope.go:117] "RemoveContainer" containerID="7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.705606 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78"} err="failed to get container status \"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\": rpc error: code = NotFound desc = could not find container \"7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78\": container with ID starting with 7f815fda91cd7215f9795116c7a2e80e82e5c94e0a63cd7e762b862fe5dd1b78 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.705637 5007 scope.go:117] "RemoveContainer" containerID="deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.705982 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91"} err="failed to get container status \"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\": rpc error: code = NotFound desc = could not find container \"deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91\": container with ID starting with deaa7fe1dbdb69a64312cd274fc6fdeed061f29949dc439646b3d29130cdcc91 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.706011 5007 scope.go:117] "RemoveContainer" 
containerID="2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.706306 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07"} err="failed to get container status \"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\": rpc error: code = NotFound desc = could not find container \"2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07\": container with ID starting with 2497cfcc17ffa3060f02fa3599fca96cd960b9272962f7bab977cd1be56bad07 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.706334 5007 scope.go:117] "RemoveContainer" containerID="67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.706601 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105"} err="failed to get container status \"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\": rpc error: code = NotFound desc = could not find container \"67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105\": container with ID starting with 67d1ecd924e2ad598f31a1c3a57efe598b60f053108005d21273c5e9eebe5105 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.706623 5007 scope.go:117] "RemoveContainer" containerID="baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.706996 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201"} err="failed to get container status \"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\": rpc error: code = NotFound desc = could 
not find container \"baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201\": container with ID starting with baff65edf92ee0109d516233025c32810c0548e003f201798df1f219b1629201 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.707022 5007 scope.go:117] "RemoveContainer" containerID="8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.707287 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065"} err="failed to get container status \"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\": rpc error: code = NotFound desc = could not find container \"8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065\": container with ID starting with 8de2e6a5bfee1bd771fea159d6cc39b48ded62cbb58d270144430c6685e11065 not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.707312 5007 scope.go:117] "RemoveContainer" containerID="510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.707622 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed"} err="failed to get container status \"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed\": rpc error: code = NotFound desc = could not find container \"510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed\": container with ID starting with 510f96f0b63106c5b8b45a52c22b2919d07b0d575ab63501bd24d27b6745c3ed not found: ID does not exist" Feb 18 19:48:45 crc kubenswrapper[5007]: I0218 19:48:45.923905 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de3b9770-e878-4070-978e-29931dd72c35" 
path="/var/lib/kubelet/pods/de3b9770-e878-4070-978e-29931dd72c35/volumes" Feb 18 19:48:46 crc kubenswrapper[5007]: I0218 19:48:46.423942 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="a165cc76c2e5187ea1399af2eea9deb083731acfe4acb3767cb0941dfb479641" exitCode=0 Feb 18 19:48:46 crc kubenswrapper[5007]: I0218 19:48:46.424006 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"a165cc76c2e5187ea1399af2eea9deb083731acfe4acb3767cb0941dfb479641"} Feb 18 19:48:46 crc kubenswrapper[5007]: I0218 19:48:46.424451 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"10c20995e1da7316ed8d3f74629b0d4549ee655af8465574d8a6d0f66d48031f"} Feb 18 19:48:46 crc kubenswrapper[5007]: I0218 19:48:46.424484 5007 scope.go:117] "RemoveContainer" containerID="21a62a8c05493231db44d9e0f17c6cd72f955097074b39625fb84dd10a0c498d" Feb 18 19:48:46 crc kubenswrapper[5007]: I0218 19:48:46.427813 5007 generic.go:334] "Generic (PLEG): container finished" podID="9b066082-e74c-4b33-ab9a-afb9f01aa170" containerID="51ef7bf146e1f9e6fc9b5f26ab16cea864c6d43f76670b04b28ec0609c33db1f" exitCode=0 Feb 18 19:48:46 crc kubenswrapper[5007]: I0218 19:48:46.427934 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" event={"ID":"9b066082-e74c-4b33-ab9a-afb9f01aa170","Type":"ContainerDied","Data":"51ef7bf146e1f9e6fc9b5f26ab16cea864c6d43f76670b04b28ec0609c33db1f"} Feb 18 19:48:47 crc kubenswrapper[5007]: I0218 19:48:47.442845 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" 
event={"ID":"9b066082-e74c-4b33-ab9a-afb9f01aa170","Type":"ContainerStarted","Data":"a66f691e6481ccac87fff75b09d08244b11d9eef50f25bfff5697fb8476e5d4e"} Feb 18 19:48:47 crc kubenswrapper[5007]: I0218 19:48:47.443613 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" event={"ID":"9b066082-e74c-4b33-ab9a-afb9f01aa170","Type":"ContainerStarted","Data":"6c4fa21c3231ead90cc65d214f7337f2fba85d5063cd7ab7445337914a6bbe2c"} Feb 18 19:48:47 crc kubenswrapper[5007]: I0218 19:48:47.443638 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" event={"ID":"9b066082-e74c-4b33-ab9a-afb9f01aa170","Type":"ContainerStarted","Data":"5648b4ed7db9901f3a3c52f7ae0fca889ac9f1adadfff5af0874f4966ada0907"} Feb 18 19:48:47 crc kubenswrapper[5007]: I0218 19:48:47.443654 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" event={"ID":"9b066082-e74c-4b33-ab9a-afb9f01aa170","Type":"ContainerStarted","Data":"8f04f507f014799c2a8b8d2aafbc47bdfc896e4a657481e7aad5b2364bc4a114"} Feb 18 19:48:47 crc kubenswrapper[5007]: I0218 19:48:47.443669 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" event={"ID":"9b066082-e74c-4b33-ab9a-afb9f01aa170","Type":"ContainerStarted","Data":"b52a2d8dbc7559a2ffb7a91e078193711ccb66ddcf993db78c1c0a9ae5db7152"} Feb 18 19:48:47 crc kubenswrapper[5007]: I0218 19:48:47.443684 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" event={"ID":"9b066082-e74c-4b33-ab9a-afb9f01aa170","Type":"ContainerStarted","Data":"e9ff7f83bbf9d3d2513f61b2d2f1fd80a822d744e2a4f50b75a084ecd9742cab"} Feb 18 19:48:50 crc kubenswrapper[5007]: I0218 19:48:50.475876 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" 
event={"ID":"9b066082-e74c-4b33-ab9a-afb9f01aa170","Type":"ContainerStarted","Data":"3ded74f4b83b5d6160dc45ba921545d75cdce4411510cb8c1402edcefe89dad2"} Feb 18 19:48:52 crc kubenswrapper[5007]: I0218 19:48:52.498800 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" event={"ID":"9b066082-e74c-4b33-ab9a-afb9f01aa170","Type":"ContainerStarted","Data":"8590432f447556adf9b5e2f6657a63d13d9c320be55a6ae45a23355e892cfaa2"} Feb 18 19:48:52 crc kubenswrapper[5007]: I0218 19:48:52.499344 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:52 crc kubenswrapper[5007]: I0218 19:48:52.527795 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:52 crc kubenswrapper[5007]: I0218 19:48:52.537231 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" podStartSLOduration=7.537211773 podStartE2EDuration="7.537211773s" podCreationTimestamp="2026-02-18 19:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:48:52.533830034 +0000 UTC m=+617.315949568" watchObservedRunningTime="2026-02-18 19:48:52.537211773 +0000 UTC m=+617.319331297" Feb 18 19:48:53 crc kubenswrapper[5007]: I0218 19:48:53.506487 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:53 crc kubenswrapper[5007]: I0218 19:48:53.506634 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:53 crc kubenswrapper[5007]: I0218 19:48:53.550531 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:48:57 crc 
kubenswrapper[5007]: I0218 19:48:57.909605 5007 scope.go:117] "RemoveContainer" containerID="e7957d56fb4dadc65bfa2816698c583bf7f0be11331ec1ec2793b1354173aec0" Feb 18 19:48:57 crc kubenswrapper[5007]: E0218 19:48:57.910336 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kjbbj_openshift-multus(9c2fc812-cdd8-415b-a138-b8a3e14fd396)\"" pod="openshift-multus/multus-kjbbj" podUID="9c2fc812-cdd8-415b-a138-b8a3e14fd396" Feb 18 19:49:10 crc kubenswrapper[5007]: I0218 19:49:10.910108 5007 scope.go:117] "RemoveContainer" containerID="e7957d56fb4dadc65bfa2816698c583bf7f0be11331ec1ec2793b1354173aec0" Feb 18 19:49:11 crc kubenswrapper[5007]: I0218 19:49:11.657023 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kjbbj_9c2fc812-cdd8-415b-a138-b8a3e14fd396/kube-multus/2.log" Feb 18 19:49:11 crc kubenswrapper[5007]: I0218 19:49:11.657892 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kjbbj_9c2fc812-cdd8-415b-a138-b8a3e14fd396/kube-multus/1.log" Feb 18 19:49:11 crc kubenswrapper[5007]: I0218 19:49:11.657962 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kjbbj" event={"ID":"9c2fc812-cdd8-415b-a138-b8a3e14fd396","Type":"ContainerStarted","Data":"5cd2ae369e62d319ef7a08022c2f6625df46c38112ec8bb5cc0b70e06ce98498"} Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.112699 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5"] Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.114976 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.119186 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.132895 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5"] Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.239684 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb6nl\" (UniqueName: \"kubernetes.io/projected/027a83a0-e299-4063-afdb-272fc0a08b32-kube-api-access-tb6nl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5\" (UID: \"027a83a0-e299-4063-afdb-272fc0a08b32\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.239769 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/027a83a0-e299-4063-afdb-272fc0a08b32-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5\" (UID: \"027a83a0-e299-4063-afdb-272fc0a08b32\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.239934 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/027a83a0-e299-4063-afdb-272fc0a08b32-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5\" (UID: \"027a83a0-e299-4063-afdb-272fc0a08b32\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: 
I0218 19:49:12.341757 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/027a83a0-e299-4063-afdb-272fc0a08b32-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5\" (UID: \"027a83a0-e299-4063-afdb-272fc0a08b32\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.341906 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb6nl\" (UniqueName: \"kubernetes.io/projected/027a83a0-e299-4063-afdb-272fc0a08b32-kube-api-access-tb6nl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5\" (UID: \"027a83a0-e299-4063-afdb-272fc0a08b32\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.341939 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/027a83a0-e299-4063-afdb-272fc0a08b32-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5\" (UID: \"027a83a0-e299-4063-afdb-272fc0a08b32\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.342956 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/027a83a0-e299-4063-afdb-272fc0a08b32-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5\" (UID: \"027a83a0-e299-4063-afdb-272fc0a08b32\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.342995 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/027a83a0-e299-4063-afdb-272fc0a08b32-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5\" (UID: \"027a83a0-e299-4063-afdb-272fc0a08b32\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.367908 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb6nl\" (UniqueName: \"kubernetes.io/projected/027a83a0-e299-4063-afdb-272fc0a08b32-kube-api-access-tb6nl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5\" (UID: \"027a83a0-e299-4063-afdb-272fc0a08b32\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.445129 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: E0218 19:49:12.485314 5007 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5_openshift-marketplace_027a83a0-e299-4063-afdb-272fc0a08b32_0(3d207fd1a8f94b8c7e073ea6dac96feda664b6b0063e7a56aaed5751e9e0c422): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 19:49:12 crc kubenswrapper[5007]: E0218 19:49:12.485485 5007 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5_openshift-marketplace_027a83a0-e299-4063-afdb-272fc0a08b32_0(3d207fd1a8f94b8c7e073ea6dac96feda664b6b0063e7a56aaed5751e9e0c422): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: E0218 19:49:12.485535 5007 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5_openshift-marketplace_027a83a0-e299-4063-afdb-272fc0a08b32_0(3d207fd1a8f94b8c7e073ea6dac96feda664b6b0063e7a56aaed5751e9e0c422): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: E0218 19:49:12.485633 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5_openshift-marketplace(027a83a0-e299-4063-afdb-272fc0a08b32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5_openshift-marketplace(027a83a0-e299-4063-afdb-272fc0a08b32)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5_openshift-marketplace_027a83a0-e299-4063-afdb-272fc0a08b32_0(3d207fd1a8f94b8c7e073ea6dac96feda664b6b0063e7a56aaed5751e9e0c422): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" podUID="027a83a0-e299-4063-afdb-272fc0a08b32" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.665239 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.666836 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:12 crc kubenswrapper[5007]: I0218 19:49:12.960870 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5"] Feb 18 19:49:12 crc kubenswrapper[5007]: W0218 19:49:12.969239 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod027a83a0_e299_4063_afdb_272fc0a08b32.slice/crio-ae70280471ada7e16633bb56350c9f9fc920480612948cc4dcaa17655ae17e65 WatchSource:0}: Error finding container ae70280471ada7e16633bb56350c9f9fc920480612948cc4dcaa17655ae17e65: Status 404 returned error can't find the container with id ae70280471ada7e16633bb56350c9f9fc920480612948cc4dcaa17655ae17e65 Feb 18 19:49:13 crc kubenswrapper[5007]: I0218 19:49:13.678173 5007 generic.go:334] "Generic (PLEG): container finished" podID="027a83a0-e299-4063-afdb-272fc0a08b32" containerID="e432334483a3a9a312bd16d9ebf9984b2f3e5106cb5cc457c3057b9da55a0af9" exitCode=0 Feb 18 19:49:13 crc kubenswrapper[5007]: I0218 19:49:13.678308 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" event={"ID":"027a83a0-e299-4063-afdb-272fc0a08b32","Type":"ContainerDied","Data":"e432334483a3a9a312bd16d9ebf9984b2f3e5106cb5cc457c3057b9da55a0af9"} Feb 18 19:49:13 crc kubenswrapper[5007]: I0218 19:49:13.678797 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" 
event={"ID":"027a83a0-e299-4063-afdb-272fc0a08b32","Type":"ContainerStarted","Data":"ae70280471ada7e16633bb56350c9f9fc920480612948cc4dcaa17655ae17e65"} Feb 18 19:49:15 crc kubenswrapper[5007]: I0218 19:49:15.376176 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ktgrw" Feb 18 19:49:16 crc kubenswrapper[5007]: I0218 19:49:16.705186 5007 generic.go:334] "Generic (PLEG): container finished" podID="027a83a0-e299-4063-afdb-272fc0a08b32" containerID="cd5ec4cc85fa6849fffdf1cae4d22b3de275aa832988cfb7721c5a3e67388327" exitCode=0 Feb 18 19:49:16 crc kubenswrapper[5007]: I0218 19:49:16.705286 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" event={"ID":"027a83a0-e299-4063-afdb-272fc0a08b32","Type":"ContainerDied","Data":"cd5ec4cc85fa6849fffdf1cae4d22b3de275aa832988cfb7721c5a3e67388327"} Feb 18 19:49:17 crc kubenswrapper[5007]: I0218 19:49:17.719421 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" event={"ID":"027a83a0-e299-4063-afdb-272fc0a08b32","Type":"ContainerStarted","Data":"e4dd79b11812de5b6713521d5fd717006777c0887751fc8c419f8a1844cdbf39"} Feb 18 19:49:17 crc kubenswrapper[5007]: I0218 19:49:17.760927 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" podStartSLOduration=3.778717343 podStartE2EDuration="5.76088957s" podCreationTimestamp="2026-02-18 19:49:12 +0000 UTC" firstStartedPulling="2026-02-18 19:49:13.681975479 +0000 UTC m=+638.464095043" lastFinishedPulling="2026-02-18 19:49:15.664147746 +0000 UTC m=+640.446267270" observedRunningTime="2026-02-18 19:49:17.752234947 +0000 UTC m=+642.534354501" watchObservedRunningTime="2026-02-18 19:49:17.76088957 +0000 UTC m=+642.543009124" Feb 18 19:49:18 
crc kubenswrapper[5007]: I0218 19:49:18.736417 5007 generic.go:334] "Generic (PLEG): container finished" podID="027a83a0-e299-4063-afdb-272fc0a08b32" containerID="e4dd79b11812de5b6713521d5fd717006777c0887751fc8c419f8a1844cdbf39" exitCode=0 Feb 18 19:49:18 crc kubenswrapper[5007]: I0218 19:49:18.736526 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" event={"ID":"027a83a0-e299-4063-afdb-272fc0a08b32","Type":"ContainerDied","Data":"e4dd79b11812de5b6713521d5fd717006777c0887751fc8c419f8a1844cdbf39"} Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.049671 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.183586 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/027a83a0-e299-4063-afdb-272fc0a08b32-bundle\") pod \"027a83a0-e299-4063-afdb-272fc0a08b32\" (UID: \"027a83a0-e299-4063-afdb-272fc0a08b32\") " Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.183762 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/027a83a0-e299-4063-afdb-272fc0a08b32-util\") pod \"027a83a0-e299-4063-afdb-272fc0a08b32\" (UID: \"027a83a0-e299-4063-afdb-272fc0a08b32\") " Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.183877 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb6nl\" (UniqueName: \"kubernetes.io/projected/027a83a0-e299-4063-afdb-272fc0a08b32-kube-api-access-tb6nl\") pod \"027a83a0-e299-4063-afdb-272fc0a08b32\" (UID: \"027a83a0-e299-4063-afdb-272fc0a08b32\") " Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.186413 5007 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/027a83a0-e299-4063-afdb-272fc0a08b32-bundle" (OuterVolumeSpecName: "bundle") pod "027a83a0-e299-4063-afdb-272fc0a08b32" (UID: "027a83a0-e299-4063-afdb-272fc0a08b32"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.192998 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/027a83a0-e299-4063-afdb-272fc0a08b32-kube-api-access-tb6nl" (OuterVolumeSpecName: "kube-api-access-tb6nl") pod "027a83a0-e299-4063-afdb-272fc0a08b32" (UID: "027a83a0-e299-4063-afdb-272fc0a08b32"). InnerVolumeSpecName "kube-api-access-tb6nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.205724 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/027a83a0-e299-4063-afdb-272fc0a08b32-util" (OuterVolumeSpecName: "util") pod "027a83a0-e299-4063-afdb-272fc0a08b32" (UID: "027a83a0-e299-4063-afdb-272fc0a08b32"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.286642 5007 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/027a83a0-e299-4063-afdb-272fc0a08b32-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.286734 5007 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/027a83a0-e299-4063-afdb-272fc0a08b32-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.286763 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb6nl\" (UniqueName: \"kubernetes.io/projected/027a83a0-e299-4063-afdb-272fc0a08b32-kube-api-access-tb6nl\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.756177 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" event={"ID":"027a83a0-e299-4063-afdb-272fc0a08b32","Type":"ContainerDied","Data":"ae70280471ada7e16633bb56350c9f9fc920480612948cc4dcaa17655ae17e65"} Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.756248 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae70280471ada7e16633bb56350c9f9fc920480612948cc4dcaa17655ae17e65" Feb 18 19:49:20 crc kubenswrapper[5007]: I0218 19:49:20.756315 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fxk5" Feb 18 19:49:29 crc kubenswrapper[5007]: I0218 19:49:29.964279 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-prhd2"] Feb 18 19:49:29 crc kubenswrapper[5007]: E0218 19:49:29.964970 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027a83a0-e299-4063-afdb-272fc0a08b32" containerName="pull" Feb 18 19:49:29 crc kubenswrapper[5007]: I0218 19:49:29.964990 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="027a83a0-e299-4063-afdb-272fc0a08b32" containerName="pull" Feb 18 19:49:29 crc kubenswrapper[5007]: E0218 19:49:29.965008 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027a83a0-e299-4063-afdb-272fc0a08b32" containerName="util" Feb 18 19:49:29 crc kubenswrapper[5007]: I0218 19:49:29.965015 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="027a83a0-e299-4063-afdb-272fc0a08b32" containerName="util" Feb 18 19:49:29 crc kubenswrapper[5007]: E0218 19:49:29.965025 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027a83a0-e299-4063-afdb-272fc0a08b32" containerName="extract" Feb 18 19:49:29 crc kubenswrapper[5007]: I0218 19:49:29.965033 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="027a83a0-e299-4063-afdb-272fc0a08b32" containerName="extract" Feb 18 19:49:29 crc kubenswrapper[5007]: I0218 19:49:29.965156 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="027a83a0-e299-4063-afdb-272fc0a08b32" containerName="extract" Feb 18 19:49:29 crc kubenswrapper[5007]: I0218 19:49:29.965682 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-prhd2" Feb 18 19:49:29 crc kubenswrapper[5007]: I0218 19:49:29.967534 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-ft88g" Feb 18 19:49:29 crc kubenswrapper[5007]: I0218 19:49:29.968603 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 18 19:49:29 crc kubenswrapper[5007]: I0218 19:49:29.973012 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 18 19:49:29 crc kubenswrapper[5007]: I0218 19:49:29.979996 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-prhd2"] Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.092731 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x"] Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.093541 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.099342 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.099592 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-vgcsx" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.106341 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x"] Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.115109 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8"] Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.116205 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.132168 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plp4r\" (UniqueName: \"kubernetes.io/projected/7b64a6ba-2aed-4e16-bcad-ac42aee06641-kube-api-access-plp4r\") pod \"obo-prometheus-operator-68bc856cb9-prhd2\" (UID: \"7b64a6ba-2aed-4e16-bcad-ac42aee06641\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-prhd2" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.179539 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8"] Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.233467 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/adf0680a-3b7d-4c05-8547-7f39264b39e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x\" (UID: \"adf0680a-3b7d-4c05-8547-7f39264b39e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.233545 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15209e0a-d874-4e5a-aac8-af1d0ee1177c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8\" (UID: \"15209e0a-d874-4e5a-aac8-af1d0ee1177c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.233666 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plp4r\" (UniqueName: \"kubernetes.io/projected/7b64a6ba-2aed-4e16-bcad-ac42aee06641-kube-api-access-plp4r\") pod 
\"obo-prometheus-operator-68bc856cb9-prhd2\" (UID: \"7b64a6ba-2aed-4e16-bcad-ac42aee06641\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-prhd2" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.233697 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adf0680a-3b7d-4c05-8547-7f39264b39e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x\" (UID: \"adf0680a-3b7d-4c05-8547-7f39264b39e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.233747 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/15209e0a-d874-4e5a-aac8-af1d0ee1177c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8\" (UID: \"15209e0a-d874-4e5a-aac8-af1d0ee1177c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.261154 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plp4r\" (UniqueName: \"kubernetes.io/projected/7b64a6ba-2aed-4e16-bcad-ac42aee06641-kube-api-access-plp4r\") pod \"obo-prometheus-operator-68bc856cb9-prhd2\" (UID: \"7b64a6ba-2aed-4e16-bcad-ac42aee06641\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-prhd2" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.324828 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-cd2nw"] Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.325649 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.328375 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.328470 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-ghfxv" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.328592 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-prhd2" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.334637 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adf0680a-3b7d-4c05-8547-7f39264b39e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x\" (UID: \"adf0680a-3b7d-4c05-8547-7f39264b39e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.334699 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/15209e0a-d874-4e5a-aac8-af1d0ee1177c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8\" (UID: \"15209e0a-d874-4e5a-aac8-af1d0ee1177c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.334733 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/adf0680a-3b7d-4c05-8547-7f39264b39e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x\" (UID: \"adf0680a-3b7d-4c05-8547-7f39264b39e5\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.334757 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15209e0a-d874-4e5a-aac8-af1d0ee1177c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8\" (UID: \"15209e0a-d874-4e5a-aac8-af1d0ee1177c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.340580 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15209e0a-d874-4e5a-aac8-af1d0ee1177c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8\" (UID: \"15209e0a-d874-4e5a-aac8-af1d0ee1177c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.342636 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/15209e0a-d874-4e5a-aac8-af1d0ee1177c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8\" (UID: \"15209e0a-d874-4e5a-aac8-af1d0ee1177c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.344585 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adf0680a-3b7d-4c05-8547-7f39264b39e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x\" (UID: \"adf0680a-3b7d-4c05-8547-7f39264b39e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.345046 5007 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/adf0680a-3b7d-4c05-8547-7f39264b39e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x\" (UID: \"adf0680a-3b7d-4c05-8547-7f39264b39e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.348689 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-cd2nw"] Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.411109 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.432052 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.435776 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1842c036-105d-4c76-be81-32882b1c4158-observability-operator-tls\") pod \"observability-operator-59bdc8b94-cd2nw\" (UID: \"1842c036-105d-4c76-be81-32882b1c4158\") " pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.435901 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rvsj\" (UniqueName: \"kubernetes.io/projected/1842c036-105d-4c76-be81-32882b1c4158-kube-api-access-6rvsj\") pod \"observability-operator-59bdc8b94-cd2nw\" (UID: \"1842c036-105d-4c76-be81-32882b1c4158\") " pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.514994 5007 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/perses-operator-5bf474d74f-hkczw"] Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.515836 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hkczw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.518925 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-qlvmn" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.536468 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-hkczw"] Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.537569 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rvsj\" (UniqueName: \"kubernetes.io/projected/1842c036-105d-4c76-be81-32882b1c4158-kube-api-access-6rvsj\") pod \"observability-operator-59bdc8b94-cd2nw\" (UID: \"1842c036-105d-4c76-be81-32882b1c4158\") " pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.537832 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1842c036-105d-4c76-be81-32882b1c4158-observability-operator-tls\") pod \"observability-operator-59bdc8b94-cd2nw\" (UID: \"1842c036-105d-4c76-be81-32882b1c4158\") " pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.552553 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1842c036-105d-4c76-be81-32882b1c4158-observability-operator-tls\") pod \"observability-operator-59bdc8b94-cd2nw\" (UID: \"1842c036-105d-4c76-be81-32882b1c4158\") " pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.572169 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rvsj\" (UniqueName: \"kubernetes.io/projected/1842c036-105d-4c76-be81-32882b1c4158-kube-api-access-6rvsj\") pod \"observability-operator-59bdc8b94-cd2nw\" (UID: \"1842c036-105d-4c76-be81-32882b1c4158\") " pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.640216 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d64d838-3de3-4647-a494-00a5dc88ce92-openshift-service-ca\") pod \"perses-operator-5bf474d74f-hkczw\" (UID: \"6d64d838-3de3-4647-a494-00a5dc88ce92\") " pod="openshift-operators/perses-operator-5bf474d74f-hkczw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.640302 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vttf\" (UniqueName: \"kubernetes.io/projected/6d64d838-3de3-4647-a494-00a5dc88ce92-kube-api-access-2vttf\") pod \"perses-operator-5bf474d74f-hkczw\" (UID: \"6d64d838-3de3-4647-a494-00a5dc88ce92\") " pod="openshift-operators/perses-operator-5bf474d74f-hkczw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.685883 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.708614 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x"] Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.779099 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d64d838-3de3-4647-a494-00a5dc88ce92-openshift-service-ca\") pod \"perses-operator-5bf474d74f-hkczw\" (UID: \"6d64d838-3de3-4647-a494-00a5dc88ce92\") " pod="openshift-operators/perses-operator-5bf474d74f-hkczw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.779316 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vttf\" (UniqueName: \"kubernetes.io/projected/6d64d838-3de3-4647-a494-00a5dc88ce92-kube-api-access-2vttf\") pod \"perses-operator-5bf474d74f-hkczw\" (UID: \"6d64d838-3de3-4647-a494-00a5dc88ce92\") " pod="openshift-operators/perses-operator-5bf474d74f-hkczw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.782036 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-prhd2"] Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.782923 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d64d838-3de3-4647-a494-00a5dc88ce92-openshift-service-ca\") pod \"perses-operator-5bf474d74f-hkczw\" (UID: \"6d64d838-3de3-4647-a494-00a5dc88ce92\") " pod="openshift-operators/perses-operator-5bf474d74f-hkczw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.809337 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vttf\" (UniqueName: \"kubernetes.io/projected/6d64d838-3de3-4647-a494-00a5dc88ce92-kube-api-access-2vttf\") pod 
\"perses-operator-5bf474d74f-hkczw\" (UID: \"6d64d838-3de3-4647-a494-00a5dc88ce92\") " pod="openshift-operators/perses-operator-5bf474d74f-hkczw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.824357 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x" event={"ID":"adf0680a-3b7d-4c05-8547-7f39264b39e5","Type":"ContainerStarted","Data":"63f9eaef64feff13d1ce215cb29999bd036d3af43a86376cc0407a1a40b25517"} Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.825701 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-prhd2" event={"ID":"7b64a6ba-2aed-4e16-bcad-ac42aee06641","Type":"ContainerStarted","Data":"b151758fce3c2d4b1f6952af4bc9ab64b5b22031a0edb42fd3330b4c8d4aad60"} Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.829830 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8"] Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.863358 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hkczw" Feb 18 19:49:30 crc kubenswrapper[5007]: I0218 19:49:30.995370 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-cd2nw"] Feb 18 19:49:31 crc kubenswrapper[5007]: W0218 19:49:31.013850 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1842c036_105d_4c76_be81_32882b1c4158.slice/crio-76b5a5a9b18b42dd2afb7a7a46b8270f52258d9d2ec0b841ed4dc97ee24bcac7 WatchSource:0}: Error finding container 76b5a5a9b18b42dd2afb7a7a46b8270f52258d9d2ec0b841ed4dc97ee24bcac7: Status 404 returned error can't find the container with id 76b5a5a9b18b42dd2afb7a7a46b8270f52258d9d2ec0b841ed4dc97ee24bcac7 Feb 18 19:49:31 crc kubenswrapper[5007]: I0218 19:49:31.101034 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-hkczw"] Feb 18 19:49:31 crc kubenswrapper[5007]: W0218 19:49:31.104225 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d64d838_3de3_4647_a494_00a5dc88ce92.slice/crio-ee4f333ae0c0ca66462e3960c1bbc1b66cdbfa5774a85e01eb7528780d0ce3c6 WatchSource:0}: Error finding container ee4f333ae0c0ca66462e3960c1bbc1b66cdbfa5774a85e01eb7528780d0ce3c6: Status 404 returned error can't find the container with id ee4f333ae0c0ca66462e3960c1bbc1b66cdbfa5774a85e01eb7528780d0ce3c6 Feb 18 19:49:31 crc kubenswrapper[5007]: I0218 19:49:31.834668 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" event={"ID":"1842c036-105d-4c76-be81-32882b1c4158","Type":"ContainerStarted","Data":"76b5a5a9b18b42dd2afb7a7a46b8270f52258d9d2ec0b841ed4dc97ee24bcac7"} Feb 18 19:49:31 crc kubenswrapper[5007]: I0218 19:49:31.836361 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-5bf474d74f-hkczw" event={"ID":"6d64d838-3de3-4647-a494-00a5dc88ce92","Type":"ContainerStarted","Data":"ee4f333ae0c0ca66462e3960c1bbc1b66cdbfa5774a85e01eb7528780d0ce3c6"} Feb 18 19:49:31 crc kubenswrapper[5007]: I0218 19:49:31.838628 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8" event={"ID":"15209e0a-d874-4e5a-aac8-af1d0ee1177c","Type":"ContainerStarted","Data":"619f8543cb537d1bb8e961012b83d680f54dbe64dcfc7bf4127e06b9c8d40eff"} Feb 18 19:49:37 crc kubenswrapper[5007]: I0218 19:49:37.646666 5007 scope.go:117] "RemoveContainer" containerID="b55b8cd27548291a3d8d544065d6d311b2c0c87267c7dcec461b243a28ea14d9" Feb 18 19:49:41 crc kubenswrapper[5007]: I0218 19:49:41.921276 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kjbbj_9c2fc812-cdd8-415b-a138-b8a3e14fd396/kube-multus/2.log" Feb 18 19:49:42 crc kubenswrapper[5007]: I0218 19:49:42.929819 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" event={"ID":"1842c036-105d-4c76-be81-32882b1c4158","Type":"ContainerStarted","Data":"e071d186a10c80c71ef4f0fd076ca0a978991cc81297bb768b1eff0801ba8d06"} Feb 18 19:49:42 crc kubenswrapper[5007]: I0218 19:49:42.930279 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" Feb 18 19:49:42 crc kubenswrapper[5007]: I0218 19:49:42.931780 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-hkczw" event={"ID":"6d64d838-3de3-4647-a494-00a5dc88ce92","Type":"ContainerStarted","Data":"1e66d765462e03d3d2e47d924103398674d810fc5ad08d56dbbd8c8e1cb4b697"} Feb 18 19:49:42 crc kubenswrapper[5007]: I0218 19:49:42.931965 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" Feb 18 19:49:42 crc kubenswrapper[5007]: I0218 19:49:42.933230 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-prhd2" event={"ID":"7b64a6ba-2aed-4e16-bcad-ac42aee06641","Type":"ContainerStarted","Data":"79fca30927623189ac3741d9a6de4ab3b561270c064ac86845057bbbeaf47505"} Feb 18 19:49:42 crc kubenswrapper[5007]: I0218 19:49:42.934890 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8" event={"ID":"15209e0a-d874-4e5a-aac8-af1d0ee1177c","Type":"ContainerStarted","Data":"c7d863a4cb15aacfe2c02d17802c3c6bbeb0458a4821b6ca893bf2965eb936de"} Feb 18 19:49:42 crc kubenswrapper[5007]: I0218 19:49:42.935816 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x" event={"ID":"adf0680a-3b7d-4c05-8547-7f39264b39e5","Type":"ContainerStarted","Data":"f3f57a60d1cd9cf3ed1b1a6bdabc135dcfb9a545d18fd23055da90fcbd228646"} Feb 18 19:49:42 crc kubenswrapper[5007]: I0218 19:49:42.996454 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-cd2nw" podStartSLOduration=2.348787266 podStartE2EDuration="12.996416835s" podCreationTimestamp="2026-02-18 19:49:30 +0000 UTC" firstStartedPulling="2026-02-18 19:49:31.028154064 +0000 UTC m=+655.810273588" lastFinishedPulling="2026-02-18 19:49:41.675783633 +0000 UTC m=+666.457903157" observedRunningTime="2026-02-18 19:49:42.967904441 +0000 UTC m=+667.750023965" watchObservedRunningTime="2026-02-18 19:49:42.996416835 +0000 UTC m=+667.778536369" Feb 18 19:49:42 crc kubenswrapper[5007]: I0218 19:49:42.997878 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-prhd2" podStartSLOduration=3.095005332 
podStartE2EDuration="13.997867667s" podCreationTimestamp="2026-02-18 19:49:29 +0000 UTC" firstStartedPulling="2026-02-18 19:49:30.801742991 +0000 UTC m=+655.583862505" lastFinishedPulling="2026-02-18 19:49:41.704605306 +0000 UTC m=+666.486724840" observedRunningTime="2026-02-18 19:49:42.992863043 +0000 UTC m=+667.774982577" watchObservedRunningTime="2026-02-18 19:49:42.997867667 +0000 UTC m=+667.779987211" Feb 18 19:49:43 crc kubenswrapper[5007]: I0218 19:49:43.015695 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tx7m8" podStartSLOduration=2.201481889 podStartE2EDuration="13.015666922s" podCreationTimestamp="2026-02-18 19:49:30 +0000 UTC" firstStartedPulling="2026-02-18 19:49:30.86158209 +0000 UTC m=+655.643701614" lastFinishedPulling="2026-02-18 19:49:41.675767123 +0000 UTC m=+666.457886647" observedRunningTime="2026-02-18 19:49:43.009808842 +0000 UTC m=+667.791928386" watchObservedRunningTime="2026-02-18 19:49:43.015666922 +0000 UTC m=+667.797786456" Feb 18 19:49:43 crc kubenswrapper[5007]: I0218 19:49:43.053242 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-hkczw" podStartSLOduration=2.484911969 podStartE2EDuration="13.053213246s" podCreationTimestamp="2026-02-18 19:49:30 +0000 UTC" firstStartedPulling="2026-02-18 19:49:31.107480546 +0000 UTC m=+655.889600070" lastFinishedPulling="2026-02-18 19:49:41.675781823 +0000 UTC m=+666.457901347" observedRunningTime="2026-02-18 19:49:43.049844319 +0000 UTC m=+667.831963863" watchObservedRunningTime="2026-02-18 19:49:43.053213246 +0000 UTC m=+667.835332770" Feb 18 19:49:43 crc kubenswrapper[5007]: I0218 19:49:43.943212 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-hkczw" Feb 18 19:49:50 crc kubenswrapper[5007]: I0218 19:49:50.866956 5007 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-hkczw" Feb 18 19:49:50 crc kubenswrapper[5007]: I0218 19:49:50.897684 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5778fb7f4d-tfl4x" podStartSLOduration=9.960395225 podStartE2EDuration="20.897651123s" podCreationTimestamp="2026-02-18 19:49:30 +0000 UTC" firstStartedPulling="2026-02-18 19:49:30.740680707 +0000 UTC m=+655.522800231" lastFinishedPulling="2026-02-18 19:49:41.677936605 +0000 UTC m=+666.460056129" observedRunningTime="2026-02-18 19:49:43.092026228 +0000 UTC m=+667.874145752" watchObservedRunningTime="2026-02-18 19:49:50.897651123 +0000 UTC m=+675.679770677" Feb 18 19:50:08 crc kubenswrapper[5007]: I0218 19:50:08.698795 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t"] Feb 18 19:50:08 crc kubenswrapper[5007]: I0218 19:50:08.700739 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:08 crc kubenswrapper[5007]: I0218 19:50:08.703276 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:50:08 crc kubenswrapper[5007]: I0218 19:50:08.727479 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t"] Feb 18 19:50:08 crc kubenswrapper[5007]: I0218 19:50:08.807453 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t\" (UID: \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:08 crc kubenswrapper[5007]: I0218 19:50:08.807523 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbpzb\" (UniqueName: \"kubernetes.io/projected/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-kube-api-access-vbpzb\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t\" (UID: \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:08 crc kubenswrapper[5007]: I0218 19:50:08.807580 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t\" (UID: \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:08 crc kubenswrapper[5007]: 
I0218 19:50:08.909806 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t\" (UID: \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:08 crc kubenswrapper[5007]: I0218 19:50:08.910194 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t\" (UID: \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:08 crc kubenswrapper[5007]: I0218 19:50:08.910375 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbpzb\" (UniqueName: \"kubernetes.io/projected/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-kube-api-access-vbpzb\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t\" (UID: \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:08 crc kubenswrapper[5007]: I0218 19:50:08.910495 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t\" (UID: \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:08 crc kubenswrapper[5007]: I0218 19:50:08.910661 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t\" (UID: \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:08 crc kubenswrapper[5007]: I0218 19:50:08.931791 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbpzb\" (UniqueName: \"kubernetes.io/projected/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-kube-api-access-vbpzb\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t\" (UID: \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:09 crc kubenswrapper[5007]: I0218 19:50:09.023196 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:09 crc kubenswrapper[5007]: I0218 19:50:09.258647 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t"] Feb 18 19:50:10 crc kubenswrapper[5007]: I0218 19:50:10.135062 5007 generic.go:334] "Generic (PLEG): container finished" podID="40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" containerID="75cdac971de71c6cccb8f34db0fb06301fa75ff19e96d0da4e6ccdd68231ad3e" exitCode=0 Feb 18 19:50:10 crc kubenswrapper[5007]: I0218 19:50:10.135132 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" event={"ID":"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972","Type":"ContainerDied","Data":"75cdac971de71c6cccb8f34db0fb06301fa75ff19e96d0da4e6ccdd68231ad3e"} Feb 18 19:50:10 crc kubenswrapper[5007]: I0218 19:50:10.135167 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" event={"ID":"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972","Type":"ContainerStarted","Data":"2aaecdfc41365d9cea675a9d6236cabe69febe9c2e483418512bc40635057f4f"} Feb 18 19:50:12 crc kubenswrapper[5007]: I0218 19:50:12.154864 5007 generic.go:334] "Generic (PLEG): container finished" podID="40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" containerID="868f6c397de0f34670e8829791e5315a12362b7b1e58cfa9ed5eab467a2aa10a" exitCode=0 Feb 18 19:50:12 crc kubenswrapper[5007]: I0218 19:50:12.154990 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" event={"ID":"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972","Type":"ContainerDied","Data":"868f6c397de0f34670e8829791e5315a12362b7b1e58cfa9ed5eab467a2aa10a"} Feb 18 19:50:13 crc kubenswrapper[5007]: I0218 19:50:13.170822 5007 generic.go:334] "Generic (PLEG): container finished" podID="40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" containerID="c490769342734cc57f791f29e7af7fedfe0ecc2ae2841c57104dfd30ac934484" exitCode=0 Feb 18 19:50:13 crc kubenswrapper[5007]: I0218 19:50:13.171043 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" event={"ID":"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972","Type":"ContainerDied","Data":"c490769342734cc57f791f29e7af7fedfe0ecc2ae2841c57104dfd30ac934484"} Feb 18 19:50:14 crc kubenswrapper[5007]: I0218 19:50:14.527418 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:14 crc kubenswrapper[5007]: I0218 19:50:14.708607 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbpzb\" (UniqueName: \"kubernetes.io/projected/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-kube-api-access-vbpzb\") pod \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\" (UID: \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\") " Feb 18 19:50:14 crc kubenswrapper[5007]: I0218 19:50:14.708727 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-bundle\") pod \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\" (UID: \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\") " Feb 18 19:50:14 crc kubenswrapper[5007]: I0218 19:50:14.708768 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-util\") pod \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\" (UID: \"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972\") " Feb 18 19:50:14 crc kubenswrapper[5007]: I0218 19:50:14.711251 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-bundle" (OuterVolumeSpecName: "bundle") pod "40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" (UID: "40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:50:14 crc kubenswrapper[5007]: I0218 19:50:14.721895 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-util" (OuterVolumeSpecName: "util") pod "40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" (UID: "40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:50:14 crc kubenswrapper[5007]: I0218 19:50:14.740866 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-kube-api-access-vbpzb" (OuterVolumeSpecName: "kube-api-access-vbpzb") pod "40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" (UID: "40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972"). InnerVolumeSpecName "kube-api-access-vbpzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:50:14 crc kubenswrapper[5007]: I0218 19:50:14.810890 5007 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:14 crc kubenswrapper[5007]: I0218 19:50:14.810953 5007 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:14 crc kubenswrapper[5007]: I0218 19:50:14.810971 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbpzb\" (UniqueName: \"kubernetes.io/projected/40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972-kube-api-access-vbpzb\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:15 crc kubenswrapper[5007]: I0218 19:50:15.187928 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" event={"ID":"40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972","Type":"ContainerDied","Data":"2aaecdfc41365d9cea675a9d6236cabe69febe9c2e483418512bc40635057f4f"} Feb 18 19:50:15 crc kubenswrapper[5007]: I0218 19:50:15.187987 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aaecdfc41365d9cea675a9d6236cabe69febe9c2e483418512bc40635057f4f" Feb 18 19:50:15 crc kubenswrapper[5007]: I0218 19:50:15.187996 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6q67t" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.279150 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-qpgqj"] Feb 18 19:50:17 crc kubenswrapper[5007]: E0218 19:50:17.279807 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" containerName="pull" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.279823 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" containerName="pull" Feb 18 19:50:17 crc kubenswrapper[5007]: E0218 19:50:17.279842 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" containerName="extract" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.279848 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" containerName="extract" Feb 18 19:50:17 crc kubenswrapper[5007]: E0218 19:50:17.279856 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" containerName="util" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.279862 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" containerName="util" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.279976 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="40dbacba-ceb3-43a7-a5c4-9b5fd6a6b972" containerName="extract" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.280534 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-qpgqj" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.283024 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.283060 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.283048 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jjkgf" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.300359 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-qpgqj"] Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.448827 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5hfq\" (UniqueName: \"kubernetes.io/projected/c4e253b9-896f-4db7-846b-595ee709b278-kube-api-access-r5hfq\") pod \"nmstate-operator-694c9596b7-qpgqj\" (UID: \"c4e253b9-896f-4db7-846b-595ee709b278\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-qpgqj" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.550052 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5hfq\" (UniqueName: \"kubernetes.io/projected/c4e253b9-896f-4db7-846b-595ee709b278-kube-api-access-r5hfq\") pod \"nmstate-operator-694c9596b7-qpgqj\" (UID: \"c4e253b9-896f-4db7-846b-595ee709b278\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-qpgqj" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.578868 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5hfq\" (UniqueName: \"kubernetes.io/projected/c4e253b9-896f-4db7-846b-595ee709b278-kube-api-access-r5hfq\") pod \"nmstate-operator-694c9596b7-qpgqj\" (UID: 
\"c4e253b9-896f-4db7-846b-595ee709b278\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-qpgqj" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.596568 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-qpgqj" Feb 18 19:50:17 crc kubenswrapper[5007]: I0218 19:50:17.871094 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-qpgqj"] Feb 18 19:50:18 crc kubenswrapper[5007]: I0218 19:50:18.217417 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-qpgqj" event={"ID":"c4e253b9-896f-4db7-846b-595ee709b278","Type":"ContainerStarted","Data":"a2f9f4b0f422afc18186190de013f479aea9cfdcb3d113aa962369fee16d4487"} Feb 18 19:50:21 crc kubenswrapper[5007]: I0218 19:50:21.242194 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-qpgqj" event={"ID":"c4e253b9-896f-4db7-846b-595ee709b278","Type":"ContainerStarted","Data":"e24c214f10a45e66dd199861f462159c1ba77e6b88d6d2b013159cb74d79f702"} Feb 18 19:50:21 crc kubenswrapper[5007]: I0218 19:50:21.273517 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-qpgqj" podStartSLOduration=1.711412393 podStartE2EDuration="4.273485919s" podCreationTimestamp="2026-02-18 19:50:17 +0000 UTC" firstStartedPulling="2026-02-18 19:50:17.881982616 +0000 UTC m=+702.664102140" lastFinishedPulling="2026-02-18 19:50:20.444056142 +0000 UTC m=+705.226175666" observedRunningTime="2026-02-18 19:50:21.272227443 +0000 UTC m=+706.054346997" watchObservedRunningTime="2026-02-18 19:50:21.273485919 +0000 UTC m=+706.055605483" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.232282 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gwsv6"] Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 
19:50:22.233523 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gwsv6" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.236834 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-7fn8h" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.248826 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gwsv6"] Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.261346 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd"] Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.262709 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.267636 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.287538 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ljsvz"] Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.288705 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.295535 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd"] Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.405837 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5"] Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.406651 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.408947 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.409524 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.410834 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-pr8rz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.428339 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlnqp\" (UniqueName: \"kubernetes.io/projected/cefbb390-77dd-4d75-93f5-d7aa6c65117e-kube-api-access-vlnqp\") pod \"nmstate-webhook-866bcb46dc-flsmd\" (UID: \"cefbb390-77dd-4d75-93f5-d7aa6c65117e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.428499 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cefbb390-77dd-4d75-93f5-d7aa6c65117e-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-flsmd\" (UID: \"cefbb390-77dd-4d75-93f5-d7aa6c65117e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.428642 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66q5q\" (UniqueName: \"kubernetes.io/projected/477d2782-9e42-473b-a942-01133fdf74c5-kube-api-access-66q5q\") pod \"nmstate-handler-ljsvz\" (UID: \"477d2782-9e42-473b-a942-01133fdf74c5\") " pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.428720 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/477d2782-9e42-473b-a942-01133fdf74c5-nmstate-lock\") pod \"nmstate-handler-ljsvz\" (UID: \"477d2782-9e42-473b-a942-01133fdf74c5\") " pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.428814 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltqbm\" (UniqueName: \"kubernetes.io/projected/554c03dd-b092-4029-b837-779f40296005-kube-api-access-ltqbm\") pod \"nmstate-metrics-58c85c668d-gwsv6\" (UID: \"554c03dd-b092-4029-b837-779f40296005\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gwsv6" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.428889 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/477d2782-9e42-473b-a942-01133fdf74c5-dbus-socket\") pod \"nmstate-handler-ljsvz\" (UID: \"477d2782-9e42-473b-a942-01133fdf74c5\") " pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.428957 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/477d2782-9e42-473b-a942-01133fdf74c5-ovs-socket\") pod \"nmstate-handler-ljsvz\" (UID: \"477d2782-9e42-473b-a942-01133fdf74c5\") " pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.446578 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5"] Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.531113 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlnqp\" (UniqueName: \"kubernetes.io/projected/cefbb390-77dd-4d75-93f5-d7aa6c65117e-kube-api-access-vlnqp\") pod 
\"nmstate-webhook-866bcb46dc-flsmd\" (UID: \"cefbb390-77dd-4d75-93f5-d7aa6c65117e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.531167 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cefbb390-77dd-4d75-93f5-d7aa6c65117e-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-flsmd\" (UID: \"cefbb390-77dd-4d75-93f5-d7aa6c65117e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.531205 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4d6c1771-0c3d-49f7-bdec-453c5d58cd80-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4v2p5\" (UID: \"4d6c1771-0c3d-49f7-bdec-453c5d58cd80\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.531241 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66q5q\" (UniqueName: \"kubernetes.io/projected/477d2782-9e42-473b-a942-01133fdf74c5-kube-api-access-66q5q\") pod \"nmstate-handler-ljsvz\" (UID: \"477d2782-9e42-473b-a942-01133fdf74c5\") " pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.531271 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/477d2782-9e42-473b-a942-01133fdf74c5-nmstate-lock\") pod \"nmstate-handler-ljsvz\" (UID: \"477d2782-9e42-473b-a942-01133fdf74c5\") " pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.531295 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rswj\" (UniqueName: 
\"kubernetes.io/projected/4d6c1771-0c3d-49f7-bdec-453c5d58cd80-kube-api-access-2rswj\") pod \"nmstate-console-plugin-5c78fc5d65-4v2p5\" (UID: \"4d6c1771-0c3d-49f7-bdec-453c5d58cd80\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.531321 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltqbm\" (UniqueName: \"kubernetes.io/projected/554c03dd-b092-4029-b837-779f40296005-kube-api-access-ltqbm\") pod \"nmstate-metrics-58c85c668d-gwsv6\" (UID: \"554c03dd-b092-4029-b837-779f40296005\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gwsv6" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.531350 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d6c1771-0c3d-49f7-bdec-453c5d58cd80-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4v2p5\" (UID: \"4d6c1771-0c3d-49f7-bdec-453c5d58cd80\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.531375 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/477d2782-9e42-473b-a942-01133fdf74c5-dbus-socket\") pod \"nmstate-handler-ljsvz\" (UID: \"477d2782-9e42-473b-a942-01133fdf74c5\") " pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.531390 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/477d2782-9e42-473b-a942-01133fdf74c5-ovs-socket\") pod \"nmstate-handler-ljsvz\" (UID: \"477d2782-9e42-473b-a942-01133fdf74c5\") " pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.531524 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/477d2782-9e42-473b-a942-01133fdf74c5-ovs-socket\") pod \"nmstate-handler-ljsvz\" (UID: \"477d2782-9e42-473b-a942-01133fdf74c5\") " pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.531578 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/477d2782-9e42-473b-a942-01133fdf74c5-nmstate-lock\") pod \"nmstate-handler-ljsvz\" (UID: \"477d2782-9e42-473b-a942-01133fdf74c5\") " pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: E0218 19:50:22.532004 5007 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 18 19:50:22 crc kubenswrapper[5007]: E0218 19:50:22.532065 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cefbb390-77dd-4d75-93f5-d7aa6c65117e-tls-key-pair podName:cefbb390-77dd-4d75-93f5-d7aa6c65117e nodeName:}" failed. No retries permitted until 2026-02-18 19:50:23.032045237 +0000 UTC m=+707.814164761 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/cefbb390-77dd-4d75-93f5-d7aa6c65117e-tls-key-pair") pod "nmstate-webhook-866bcb46dc-flsmd" (UID: "cefbb390-77dd-4d75-93f5-d7aa6c65117e") : secret "openshift-nmstate-webhook" not found Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.532097 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/477d2782-9e42-473b-a942-01133fdf74c5-dbus-socket\") pod \"nmstate-handler-ljsvz\" (UID: \"477d2782-9e42-473b-a942-01133fdf74c5\") " pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.551658 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlnqp\" (UniqueName: \"kubernetes.io/projected/cefbb390-77dd-4d75-93f5-d7aa6c65117e-kube-api-access-vlnqp\") pod \"nmstate-webhook-866bcb46dc-flsmd\" (UID: \"cefbb390-77dd-4d75-93f5-d7aa6c65117e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.556685 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66q5q\" (UniqueName: \"kubernetes.io/projected/477d2782-9e42-473b-a942-01133fdf74c5-kube-api-access-66q5q\") pod \"nmstate-handler-ljsvz\" (UID: \"477d2782-9e42-473b-a942-01133fdf74c5\") " pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.558305 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltqbm\" (UniqueName: \"kubernetes.io/projected/554c03dd-b092-4029-b837-779f40296005-kube-api-access-ltqbm\") pod \"nmstate-metrics-58c85c668d-gwsv6\" (UID: \"554c03dd-b092-4029-b837-779f40296005\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gwsv6" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.604554 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.607773 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6d8dc99c67-ktwn5"] Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.608573 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.626804 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d8dc99c67-ktwn5"] Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.632405 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4d6c1771-0c3d-49f7-bdec-453c5d58cd80-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4v2p5\" (UID: \"4d6c1771-0c3d-49f7-bdec-453c5d58cd80\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.632688 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rswj\" (UniqueName: \"kubernetes.io/projected/4d6c1771-0c3d-49f7-bdec-453c5d58cd80-kube-api-access-2rswj\") pod \"nmstate-console-plugin-5c78fc5d65-4v2p5\" (UID: \"4d6c1771-0c3d-49f7-bdec-453c5d58cd80\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.632783 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d6c1771-0c3d-49f7-bdec-453c5d58cd80-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4v2p5\" (UID: \"4d6c1771-0c3d-49f7-bdec-453c5d58cd80\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" Feb 18 19:50:22 crc kubenswrapper[5007]: E0218 19:50:22.633044 5007 secret.go:188] Couldn't get secret 
openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 18 19:50:22 crc kubenswrapper[5007]: E0218 19:50:22.633147 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d6c1771-0c3d-49f7-bdec-453c5d58cd80-plugin-serving-cert podName:4d6c1771-0c3d-49f7-bdec-453c5d58cd80 nodeName:}" failed. No retries permitted until 2026-02-18 19:50:23.133116528 +0000 UTC m=+707.915236052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/4d6c1771-0c3d-49f7-bdec-453c5d58cd80-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-4v2p5" (UID: "4d6c1771-0c3d-49f7-bdec-453c5d58cd80") : secret "plugin-serving-cert" not found Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.633680 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4d6c1771-0c3d-49f7-bdec-453c5d58cd80-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4v2p5\" (UID: \"4d6c1771-0c3d-49f7-bdec-453c5d58cd80\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" Feb 18 19:50:22 crc kubenswrapper[5007]: W0218 19:50:22.639378 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod477d2782_9e42_473b_a942_01133fdf74c5.slice/crio-2d432e1dd238be8db31838c17ac68d00782872858af6e0eb578a4a6fa0c135c7 WatchSource:0}: Error finding container 2d432e1dd238be8db31838c17ac68d00782872858af6e0eb578a4a6fa0c135c7: Status 404 returned error can't find the container with id 2d432e1dd238be8db31838c17ac68d00782872858af6e0eb578a4a6fa0c135c7 Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.672534 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rswj\" (UniqueName: \"kubernetes.io/projected/4d6c1771-0c3d-49f7-bdec-453c5d58cd80-kube-api-access-2rswj\") pod 
\"nmstate-console-plugin-5c78fc5d65-4v2p5\" (UID: \"4d6c1771-0c3d-49f7-bdec-453c5d58cd80\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.736623 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8cb4de88-c593-4ff8-be63-b4690f310686-console-serving-cert\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.737591 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8cb4de88-c593-4ff8-be63-b4690f310686-service-ca\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.737646 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cb4de88-c593-4ff8-be63-b4690f310686-trusted-ca-bundle\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.737689 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8cb4de88-c593-4ff8-be63-b4690f310686-console-oauth-config\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.737738 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/8cb4de88-c593-4ff8-be63-b4690f310686-console-config\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.737777 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwlgd\" (UniqueName: \"kubernetes.io/projected/8cb4de88-c593-4ff8-be63-b4690f310686-kube-api-access-fwlgd\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.737848 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8cb4de88-c593-4ff8-be63-b4690f310686-oauth-serving-cert\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.838395 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8cb4de88-c593-4ff8-be63-b4690f310686-console-config\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.838465 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwlgd\" (UniqueName: \"kubernetes.io/projected/8cb4de88-c593-4ff8-be63-b4690f310686-kube-api-access-fwlgd\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.838507 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8cb4de88-c593-4ff8-be63-b4690f310686-oauth-serving-cert\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.838558 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8cb4de88-c593-4ff8-be63-b4690f310686-console-serving-cert\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.838575 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8cb4de88-c593-4ff8-be63-b4690f310686-service-ca\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.838600 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cb4de88-c593-4ff8-be63-b4690f310686-trusted-ca-bundle\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.838623 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8cb4de88-c593-4ff8-be63-b4690f310686-console-oauth-config\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.839806 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/8cb4de88-c593-4ff8-be63-b4690f310686-oauth-serving-cert\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.840047 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8cb4de88-c593-4ff8-be63-b4690f310686-service-ca\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.840051 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cb4de88-c593-4ff8-be63-b4690f310686-trusted-ca-bundle\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.840196 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8cb4de88-c593-4ff8-be63-b4690f310686-console-config\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.842990 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8cb4de88-c593-4ff8-be63-b4690f310686-console-oauth-config\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.844247 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cb4de88-c593-4ff8-be63-b4690f310686-console-serving-cert\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.856292 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwlgd\" (UniqueName: \"kubernetes.io/projected/8cb4de88-c593-4ff8-be63-b4690f310686-kube-api-access-fwlgd\") pod \"console-6d8dc99c67-ktwn5\" (UID: \"8cb4de88-c593-4ff8-be63-b4690f310686\") " pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.856689 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gwsv6" Feb 18 19:50:22 crc kubenswrapper[5007]: I0218 19:50:22.969147 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:23 crc kubenswrapper[5007]: I0218 19:50:23.041542 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cefbb390-77dd-4d75-93f5-d7aa6c65117e-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-flsmd\" (UID: \"cefbb390-77dd-4d75-93f5-d7aa6c65117e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" Feb 18 19:50:23 crc kubenswrapper[5007]: I0218 19:50:23.055197 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cefbb390-77dd-4d75-93f5-d7aa6c65117e-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-flsmd\" (UID: \"cefbb390-77dd-4d75-93f5-d7aa6c65117e\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" Feb 18 19:50:23 crc kubenswrapper[5007]: I0218 19:50:23.144591 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4d6c1771-0c3d-49f7-bdec-453c5d58cd80-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4v2p5\" (UID: \"4d6c1771-0c3d-49f7-bdec-453c5d58cd80\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" Feb 18 19:50:23 crc kubenswrapper[5007]: I0218 19:50:23.190294 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" Feb 18 19:50:23 crc kubenswrapper[5007]: I0218 19:50:23.229238 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d6c1771-0c3d-49f7-bdec-453c5d58cd80-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4v2p5\" (UID: \"4d6c1771-0c3d-49f7-bdec-453c5d58cd80\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" Feb 18 19:50:23 crc kubenswrapper[5007]: I0218 19:50:23.235321 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gwsv6"] Feb 18 19:50:23 crc kubenswrapper[5007]: I0218 19:50:23.261519 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ljsvz" event={"ID":"477d2782-9e42-473b-a942-01133fdf74c5","Type":"ContainerStarted","Data":"2d432e1dd238be8db31838c17ac68d00782872858af6e0eb578a4a6fa0c135c7"} Feb 18 19:50:23 crc kubenswrapper[5007]: I0218 19:50:23.264774 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gwsv6" event={"ID":"554c03dd-b092-4029-b837-779f40296005","Type":"ContainerStarted","Data":"f943cca449c30a7356c2313cd6b863b6ddf641878fd934a9664625e506d3b603"} Feb 18 19:50:23 crc kubenswrapper[5007]: I0218 19:50:23.332323 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" Feb 18 19:50:23 crc kubenswrapper[5007]: I0218 19:50:23.351881 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d8dc99c67-ktwn5"] Feb 18 19:50:23 crc kubenswrapper[5007]: W0218 19:50:23.361048 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cb4de88_c593_4ff8_be63_b4690f310686.slice/crio-e3e32124ee9427461478944987117f475b59edbb9754e9b6c8c8f63f0a56661e WatchSource:0}: Error finding container e3e32124ee9427461478944987117f475b59edbb9754e9b6c8c8f63f0a56661e: Status 404 returned error can't find the container with id e3e32124ee9427461478944987117f475b59edbb9754e9b6c8c8f63f0a56661e Feb 18 19:50:23 crc kubenswrapper[5007]: I0218 19:50:23.429354 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd"] Feb 18 19:50:23 crc kubenswrapper[5007]: W0218 19:50:23.433815 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcefbb390_77dd_4d75_93f5_d7aa6c65117e.slice/crio-f87f8d582dfb3e8749e3b390b4dc588dfa7340e3f91b468acf3cb8caea89fd40 WatchSource:0}: Error finding container f87f8d582dfb3e8749e3b390b4dc588dfa7340e3f91b468acf3cb8caea89fd40: Status 404 returned error can't find the container with id f87f8d582dfb3e8749e3b390b4dc588dfa7340e3f91b468acf3cb8caea89fd40 Feb 18 19:50:23 crc kubenswrapper[5007]: I0218 19:50:23.523964 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5"] Feb 18 19:50:23 crc kubenswrapper[5007]: W0218 19:50:23.529877 5007 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d6c1771_0c3d_49f7_bdec_453c5d58cd80.slice/crio-994d15224287c3a288b4edb663ec93757256c591cf9ee69af9daa8e86b5899c3 WatchSource:0}: Error finding container 994d15224287c3a288b4edb663ec93757256c591cf9ee69af9daa8e86b5899c3: Status 404 returned error can't find the container with id 994d15224287c3a288b4edb663ec93757256c591cf9ee69af9daa8e86b5899c3 Feb 18 19:50:24 crc kubenswrapper[5007]: I0218 19:50:24.275780 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d8dc99c67-ktwn5" event={"ID":"8cb4de88-c593-4ff8-be63-b4690f310686","Type":"ContainerStarted","Data":"878ada0bc4dbcdac045d2210de91f18a8847641d95172ebb2b651a71c1e1d73b"} Feb 18 19:50:24 crc kubenswrapper[5007]: I0218 19:50:24.275892 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d8dc99c67-ktwn5" event={"ID":"8cb4de88-c593-4ff8-be63-b4690f310686","Type":"ContainerStarted","Data":"e3e32124ee9427461478944987117f475b59edbb9754e9b6c8c8f63f0a56661e"} Feb 18 19:50:24 crc kubenswrapper[5007]: I0218 19:50:24.278355 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" event={"ID":"cefbb390-77dd-4d75-93f5-d7aa6c65117e","Type":"ContainerStarted","Data":"f87f8d582dfb3e8749e3b390b4dc588dfa7340e3f91b468acf3cb8caea89fd40"} Feb 18 19:50:24 crc kubenswrapper[5007]: I0218 19:50:24.281353 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" event={"ID":"4d6c1771-0c3d-49f7-bdec-453c5d58cd80","Type":"ContainerStarted","Data":"994d15224287c3a288b4edb663ec93757256c591cf9ee69af9daa8e86b5899c3"} Feb 18 19:50:24 crc kubenswrapper[5007]: I0218 19:50:24.298485 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6d8dc99c67-ktwn5" podStartSLOduration=2.29845986 podStartE2EDuration="2.29845986s" podCreationTimestamp="2026-02-18 
19:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:50:24.295232897 +0000 UTC m=+709.077352421" watchObservedRunningTime="2026-02-18 19:50:24.29845986 +0000 UTC m=+709.080579384" Feb 18 19:50:26 crc kubenswrapper[5007]: I0218 19:50:26.304933 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" event={"ID":"cefbb390-77dd-4d75-93f5-d7aa6c65117e","Type":"ContainerStarted","Data":"63923ad6d79994a4792bb02a15b5fe1031148050a1b5144814f91b2079ed03ed"} Feb 18 19:50:26 crc kubenswrapper[5007]: I0218 19:50:26.305666 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" Feb 18 19:50:26 crc kubenswrapper[5007]: I0218 19:50:26.309643 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ljsvz" event={"ID":"477d2782-9e42-473b-a942-01133fdf74c5","Type":"ContainerStarted","Data":"257712be8f1cf5f0d2e889b0af7524c4443b61ba63f33118e174ebfc57fd8fb9"} Feb 18 19:50:26 crc kubenswrapper[5007]: I0218 19:50:26.309797 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:26 crc kubenswrapper[5007]: I0218 19:50:26.314320 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gwsv6" event={"ID":"554c03dd-b092-4029-b837-779f40296005","Type":"ContainerStarted","Data":"03dc5ac75c7abd8e815bc7a8f06d03c7a15336a16cc9b02febeef9cdfee4a75d"} Feb 18 19:50:26 crc kubenswrapper[5007]: I0218 19:50:26.332009 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" podStartSLOduration=2.553784249 podStartE2EDuration="4.331989162s" podCreationTimestamp="2026-02-18 19:50:22 +0000 UTC" firstStartedPulling="2026-02-18 19:50:23.436628577 +0000 UTC 
m=+708.218748101" lastFinishedPulling="2026-02-18 19:50:25.2148335 +0000 UTC m=+709.996953014" observedRunningTime="2026-02-18 19:50:26.326082441 +0000 UTC m=+711.108201995" watchObservedRunningTime="2026-02-18 19:50:26.331989162 +0000 UTC m=+711.114108686" Feb 18 19:50:26 crc kubenswrapper[5007]: I0218 19:50:26.356497 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ljsvz" podStartSLOduration=1.7852786520000001 podStartE2EDuration="4.35647773s" podCreationTimestamp="2026-02-18 19:50:22 +0000 UTC" firstStartedPulling="2026-02-18 19:50:22.642106628 +0000 UTC m=+707.424226152" lastFinishedPulling="2026-02-18 19:50:25.213305706 +0000 UTC m=+709.995425230" observedRunningTime="2026-02-18 19:50:26.350905039 +0000 UTC m=+711.133024603" watchObservedRunningTime="2026-02-18 19:50:26.35647773 +0000 UTC m=+711.138597254" Feb 18 19:50:27 crc kubenswrapper[5007]: I0218 19:50:27.324132 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" event={"ID":"4d6c1771-0c3d-49f7-bdec-453c5d58cd80","Type":"ContainerStarted","Data":"702bafeef0a8ecbb7cbd3a40b2f9c4e95e968c9b458f794f174cdfff8c13dcfd"} Feb 18 19:50:27 crc kubenswrapper[5007]: I0218 19:50:27.352034 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4v2p5" podStartSLOduration=2.5691873640000003 podStartE2EDuration="5.352008927s" podCreationTimestamp="2026-02-18 19:50:22 +0000 UTC" firstStartedPulling="2026-02-18 19:50:23.533317761 +0000 UTC m=+708.315437285" lastFinishedPulling="2026-02-18 19:50:26.316139324 +0000 UTC m=+711.098258848" observedRunningTime="2026-02-18 19:50:27.343675427 +0000 UTC m=+712.125794961" watchObservedRunningTime="2026-02-18 19:50:27.352008927 +0000 UTC m=+712.134128461" Feb 18 19:50:28 crc kubenswrapper[5007]: I0218 19:50:28.335453 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-gwsv6" event={"ID":"554c03dd-b092-4029-b837-779f40296005","Type":"ContainerStarted","Data":"612db77a31097ea122a3b3be6b161838f580a0fe2458f368886c67d0156b2792"} Feb 18 19:50:28 crc kubenswrapper[5007]: I0218 19:50:28.362235 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gwsv6" podStartSLOduration=2.073603983 podStartE2EDuration="6.362203638s" podCreationTimestamp="2026-02-18 19:50:22 +0000 UTC" firstStartedPulling="2026-02-18 19:50:23.241307103 +0000 UTC m=+708.023426627" lastFinishedPulling="2026-02-18 19:50:27.529906758 +0000 UTC m=+712.312026282" observedRunningTime="2026-02-18 19:50:28.360827538 +0000 UTC m=+713.142947062" watchObservedRunningTime="2026-02-18 19:50:28.362203638 +0000 UTC m=+713.144323202" Feb 18 19:50:32 crc kubenswrapper[5007]: I0218 19:50:32.652998 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ljsvz" Feb 18 19:50:32 crc kubenswrapper[5007]: I0218 19:50:32.970323 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:32 crc kubenswrapper[5007]: I0218 19:50:32.970423 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:32 crc kubenswrapper[5007]: I0218 19:50:32.979201 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:33 crc kubenswrapper[5007]: I0218 19:50:33.384185 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6d8dc99c67-ktwn5" Feb 18 19:50:33 crc kubenswrapper[5007]: I0218 19:50:33.449924 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5kxmr"] Feb 18 19:50:43 crc kubenswrapper[5007]: I0218 19:50:43.203604 
5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-flsmd" Feb 18 19:50:45 crc kubenswrapper[5007]: I0218 19:50:45.664020 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:50:45 crc kubenswrapper[5007]: I0218 19:50:45.664653 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:50:58 crc kubenswrapper[5007]: I0218 19:50:58.523173 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-5kxmr" podUID="7afb27dc-b979-45ec-ae79-5595d0dedb6e" containerName="console" containerID="cri-o://55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604" gracePeriod=15 Feb 18 19:50:58 crc kubenswrapper[5007]: I0218 19:50:58.896707 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5kxmr_7afb27dc-b979-45ec-ae79-5595d0dedb6e/console/0.log" Feb 18 19:50:58 crc kubenswrapper[5007]: I0218 19:50:58.896948 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.066307 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6t22\" (UniqueName: \"kubernetes.io/projected/7afb27dc-b979-45ec-ae79-5595d0dedb6e-kube-api-access-n6t22\") pod \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.066754 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-trusted-ca-bundle\") pod \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.066837 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-oauth-config\") pod \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.066882 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-serving-cert\") pod \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.066927 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-service-ca\") pod \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.066964 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-oauth-serving-cert\") pod \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.067020 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-config\") pod \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\" (UID: \"7afb27dc-b979-45ec-ae79-5595d0dedb6e\") " Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.068007 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7afb27dc-b979-45ec-ae79-5595d0dedb6e" (UID: "7afb27dc-b979-45ec-ae79-5595d0dedb6e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.068349 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-config" (OuterVolumeSpecName: "console-config") pod "7afb27dc-b979-45ec-ae79-5595d0dedb6e" (UID: "7afb27dc-b979-45ec-ae79-5595d0dedb6e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.068446 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-service-ca" (OuterVolumeSpecName: "service-ca") pod "7afb27dc-b979-45ec-ae79-5595d0dedb6e" (UID: "7afb27dc-b979-45ec-ae79-5595d0dedb6e"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.068559 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7afb27dc-b979-45ec-ae79-5595d0dedb6e" (UID: "7afb27dc-b979-45ec-ae79-5595d0dedb6e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.073078 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afb27dc-b979-45ec-ae79-5595d0dedb6e-kube-api-access-n6t22" (OuterVolumeSpecName: "kube-api-access-n6t22") pod "7afb27dc-b979-45ec-ae79-5595d0dedb6e" (UID: "7afb27dc-b979-45ec-ae79-5595d0dedb6e"). InnerVolumeSpecName "kube-api-access-n6t22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.076139 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7afb27dc-b979-45ec-ae79-5595d0dedb6e" (UID: "7afb27dc-b979-45ec-ae79-5595d0dedb6e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.077944 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7afb27dc-b979-45ec-ae79-5595d0dedb6e" (UID: "7afb27dc-b979-45ec-ae79-5595d0dedb6e"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.168098 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6t22\" (UniqueName: \"kubernetes.io/projected/7afb27dc-b979-45ec-ae79-5595d0dedb6e-kube-api-access-n6t22\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.168135 5007 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.168145 5007 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.168154 5007 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.168178 5007 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.168189 5007 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.168200 5007 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7afb27dc-b979-45ec-ae79-5595d0dedb6e-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:50:59 crc 
kubenswrapper[5007]: I0218 19:50:59.625719 5007 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5kxmr_7afb27dc-b979-45ec-ae79-5595d0dedb6e/console/0.log" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.626622 5007 generic.go:334] "Generic (PLEG): container finished" podID="7afb27dc-b979-45ec-ae79-5595d0dedb6e" containerID="55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604" exitCode=2 Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.626688 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5kxmr" event={"ID":"7afb27dc-b979-45ec-ae79-5595d0dedb6e","Type":"ContainerDied","Data":"55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604"} Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.626730 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5kxmr" event={"ID":"7afb27dc-b979-45ec-ae79-5595d0dedb6e","Type":"ContainerDied","Data":"8a49c4e16cbc996c5e75e7361ab054ea4ed928c107a4f7004abac8d7ee8c42d2"} Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.626762 5007 scope.go:117] "RemoveContainer" containerID="55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.626965 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5kxmr" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.649137 5007 scope.go:117] "RemoveContainer" containerID="55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604" Feb 18 19:50:59 crc kubenswrapper[5007]: E0218 19:50:59.649742 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604\": container with ID starting with 55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604 not found: ID does not exist" containerID="55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.649793 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604"} err="failed to get container status \"55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604\": rpc error: code = NotFound desc = could not find container \"55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604\": container with ID starting with 55ff16324b66cbbedd1ff425adf4d3a6dd85ae18963e4a010d1cfbc40720f604 not found: ID does not exist" Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.666679 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5kxmr"] Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.670499 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-5kxmr"] Feb 18 19:50:59 crc kubenswrapper[5007]: I0218 19:50:59.916808 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afb27dc-b979-45ec-ae79-5595d0dedb6e" path="/var/lib/kubelet/pods/7afb27dc-b979-45ec-ae79-5595d0dedb6e/volumes" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.448514 5007 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m"] Feb 18 19:51:00 crc kubenswrapper[5007]: E0218 19:51:00.448883 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afb27dc-b979-45ec-ae79-5595d0dedb6e" containerName="console" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.448902 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afb27dc-b979-45ec-ae79-5595d0dedb6e" containerName="console" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.449051 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="7afb27dc-b979-45ec-ae79-5595d0dedb6e" containerName="console" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.450223 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.452277 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.471988 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m"] Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.484480 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwc99\" (UniqueName: \"kubernetes.io/projected/ed5bd97d-268e-434e-b173-ebe73f543eca-kube-api-access-lwc99\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m\" (UID: \"ed5bd97d-268e-434e-b173-ebe73f543eca\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.484759 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ed5bd97d-268e-434e-b173-ebe73f543eca-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m\" (UID: \"ed5bd97d-268e-434e-b173-ebe73f543eca\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.484858 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed5bd97d-268e-434e-b173-ebe73f543eca-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m\" (UID: \"ed5bd97d-268e-434e-b173-ebe73f543eca\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.585993 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwc99\" (UniqueName: \"kubernetes.io/projected/ed5bd97d-268e-434e-b173-ebe73f543eca-kube-api-access-lwc99\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m\" (UID: \"ed5bd97d-268e-434e-b173-ebe73f543eca\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.586555 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed5bd97d-268e-434e-b173-ebe73f543eca-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m\" (UID: \"ed5bd97d-268e-434e-b173-ebe73f543eca\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.586580 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed5bd97d-268e-434e-b173-ebe73f543eca-bundle\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m\" (UID: \"ed5bd97d-268e-434e-b173-ebe73f543eca\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.587252 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed5bd97d-268e-434e-b173-ebe73f543eca-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m\" (UID: \"ed5bd97d-268e-434e-b173-ebe73f543eca\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.587283 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed5bd97d-268e-434e-b173-ebe73f543eca-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m\" (UID: \"ed5bd97d-268e-434e-b173-ebe73f543eca\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.605136 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwc99\" (UniqueName: \"kubernetes.io/projected/ed5bd97d-268e-434e-b173-ebe73f543eca-kube-api-access-lwc99\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m\" (UID: \"ed5bd97d-268e-434e-b173-ebe73f543eca\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:00 crc kubenswrapper[5007]: I0218 19:51:00.767776 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:01 crc kubenswrapper[5007]: I0218 19:51:01.037510 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m"] Feb 18 19:51:01 crc kubenswrapper[5007]: I0218 19:51:01.658048 5007 generic.go:334] "Generic (PLEG): container finished" podID="ed5bd97d-268e-434e-b173-ebe73f543eca" containerID="c1e1d9a55206a210c3434d1ee9674317e52b60e741ef163c3a4bcc9c909e1c98" exitCode=0 Feb 18 19:51:01 crc kubenswrapper[5007]: I0218 19:51:01.658110 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" event={"ID":"ed5bd97d-268e-434e-b173-ebe73f543eca","Type":"ContainerDied","Data":"c1e1d9a55206a210c3434d1ee9674317e52b60e741ef163c3a4bcc9c909e1c98"} Feb 18 19:51:01 crc kubenswrapper[5007]: I0218 19:51:01.658141 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" event={"ID":"ed5bd97d-268e-434e-b173-ebe73f543eca","Type":"ContainerStarted","Data":"7170a95de0c3ff7d8becde311902cf4b9450d27e69d92ef62c4b5dda7917de0a"} Feb 18 19:51:03 crc kubenswrapper[5007]: I0218 19:51:03.673580 5007 generic.go:334] "Generic (PLEG): container finished" podID="ed5bd97d-268e-434e-b173-ebe73f543eca" containerID="c159cf494b645dccd4d41614b8403b9b2307f42dd36f1cec189b56ab3f6538ef" exitCode=0 Feb 18 19:51:03 crc kubenswrapper[5007]: I0218 19:51:03.673641 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" event={"ID":"ed5bd97d-268e-434e-b173-ebe73f543eca","Type":"ContainerDied","Data":"c159cf494b645dccd4d41614b8403b9b2307f42dd36f1cec189b56ab3f6538ef"} Feb 18 19:51:04 crc kubenswrapper[5007]: I0218 19:51:04.684921 5007 
generic.go:334] "Generic (PLEG): container finished" podID="ed5bd97d-268e-434e-b173-ebe73f543eca" containerID="c7d9e9f663a17411cd7e0c943253c5652ea14c4dd9b37964e30f9a5390fa8eb9" exitCode=0 Feb 18 19:51:04 crc kubenswrapper[5007]: I0218 19:51:04.684983 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" event={"ID":"ed5bd97d-268e-434e-b173-ebe73f543eca","Type":"ContainerDied","Data":"c7d9e9f663a17411cd7e0c943253c5652ea14c4dd9b37964e30f9a5390fa8eb9"} Feb 18 19:51:05 crc kubenswrapper[5007]: I0218 19:51:05.970928 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:05 crc kubenswrapper[5007]: I0218 19:51:05.989104 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwc99\" (UniqueName: \"kubernetes.io/projected/ed5bd97d-268e-434e-b173-ebe73f543eca-kube-api-access-lwc99\") pod \"ed5bd97d-268e-434e-b173-ebe73f543eca\" (UID: \"ed5bd97d-268e-434e-b173-ebe73f543eca\") " Feb 18 19:51:05 crc kubenswrapper[5007]: I0218 19:51:05.989167 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed5bd97d-268e-434e-b173-ebe73f543eca-bundle\") pod \"ed5bd97d-268e-434e-b173-ebe73f543eca\" (UID: \"ed5bd97d-268e-434e-b173-ebe73f543eca\") " Feb 18 19:51:05 crc kubenswrapper[5007]: I0218 19:51:05.989314 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed5bd97d-268e-434e-b173-ebe73f543eca-util\") pod \"ed5bd97d-268e-434e-b173-ebe73f543eca\" (UID: \"ed5bd97d-268e-434e-b173-ebe73f543eca\") " Feb 18 19:51:05 crc kubenswrapper[5007]: I0218 19:51:05.992747 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ed5bd97d-268e-434e-b173-ebe73f543eca-bundle" (OuterVolumeSpecName: "bundle") pod "ed5bd97d-268e-434e-b173-ebe73f543eca" (UID: "ed5bd97d-268e-434e-b173-ebe73f543eca"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:51:06 crc kubenswrapper[5007]: I0218 19:51:06.003718 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5bd97d-268e-434e-b173-ebe73f543eca-kube-api-access-lwc99" (OuterVolumeSpecName: "kube-api-access-lwc99") pod "ed5bd97d-268e-434e-b173-ebe73f543eca" (UID: "ed5bd97d-268e-434e-b173-ebe73f543eca"). InnerVolumeSpecName "kube-api-access-lwc99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:51:06 crc kubenswrapper[5007]: I0218 19:51:06.014032 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed5bd97d-268e-434e-b173-ebe73f543eca-util" (OuterVolumeSpecName: "util") pod "ed5bd97d-268e-434e-b173-ebe73f543eca" (UID: "ed5bd97d-268e-434e-b173-ebe73f543eca"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:51:06 crc kubenswrapper[5007]: I0218 19:51:06.090703 5007 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed5bd97d-268e-434e-b173-ebe73f543eca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:06 crc kubenswrapper[5007]: I0218 19:51:06.090744 5007 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed5bd97d-268e-434e-b173-ebe73f543eca-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:06 crc kubenswrapper[5007]: I0218 19:51:06.090762 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwc99\" (UniqueName: \"kubernetes.io/projected/ed5bd97d-268e-434e-b173-ebe73f543eca-kube-api-access-lwc99\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:06 crc kubenswrapper[5007]: I0218 19:51:06.721214 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" event={"ID":"ed5bd97d-268e-434e-b173-ebe73f543eca","Type":"ContainerDied","Data":"7170a95de0c3ff7d8becde311902cf4b9450d27e69d92ef62c4b5dda7917de0a"} Feb 18 19:51:06 crc kubenswrapper[5007]: I0218 19:51:06.721262 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7170a95de0c3ff7d8becde311902cf4b9450d27e69d92ef62c4b5dda7917de0a" Feb 18 19:51:06 crc kubenswrapper[5007]: I0218 19:51:06.721317 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2135s42m" Feb 18 19:51:10 crc kubenswrapper[5007]: I0218 19:51:10.530544 5007 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.396734 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8"] Feb 18 19:51:15 crc kubenswrapper[5007]: E0218 19:51:15.398645 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5bd97d-268e-434e-b173-ebe73f543eca" containerName="pull" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.398794 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5bd97d-268e-434e-b173-ebe73f543eca" containerName="pull" Feb 18 19:51:15 crc kubenswrapper[5007]: E0218 19:51:15.398945 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5bd97d-268e-434e-b173-ebe73f543eca" containerName="util" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.399050 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5bd97d-268e-434e-b173-ebe73f543eca" containerName="util" Feb 18 19:51:15 crc kubenswrapper[5007]: E0218 19:51:15.399142 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5bd97d-268e-434e-b173-ebe73f543eca" containerName="extract" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.399235 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5bd97d-268e-434e-b173-ebe73f543eca" containerName="extract" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.399552 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed5bd97d-268e-434e-b173-ebe73f543eca" containerName="extract" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.401104 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.402980 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.403606 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.404359 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.404359 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-96bqw" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.404450 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.423405 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8"] Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.476329 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f2132a4-bb41-4ba9-9e07-71a4978001c4-webhook-cert\") pod \"metallb-operator-controller-manager-8688dbc7bc-tt8g8\" (UID: \"4f2132a4-bb41-4ba9-9e07-71a4978001c4\") " pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.476861 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f2132a4-bb41-4ba9-9e07-71a4978001c4-apiservice-cert\") pod \"metallb-operator-controller-manager-8688dbc7bc-tt8g8\" (UID: 
\"4f2132a4-bb41-4ba9-9e07-71a4978001c4\") " pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.476900 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqzzj\" (UniqueName: \"kubernetes.io/projected/4f2132a4-bb41-4ba9-9e07-71a4978001c4-kube-api-access-hqzzj\") pod \"metallb-operator-controller-manager-8688dbc7bc-tt8g8\" (UID: \"4f2132a4-bb41-4ba9-9e07-71a4978001c4\") " pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.577603 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f2132a4-bb41-4ba9-9e07-71a4978001c4-apiservice-cert\") pod \"metallb-operator-controller-manager-8688dbc7bc-tt8g8\" (UID: \"4f2132a4-bb41-4ba9-9e07-71a4978001c4\") " pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.577661 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqzzj\" (UniqueName: \"kubernetes.io/projected/4f2132a4-bb41-4ba9-9e07-71a4978001c4-kube-api-access-hqzzj\") pod \"metallb-operator-controller-manager-8688dbc7bc-tt8g8\" (UID: \"4f2132a4-bb41-4ba9-9e07-71a4978001c4\") " pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.577770 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f2132a4-bb41-4ba9-9e07-71a4978001c4-webhook-cert\") pod \"metallb-operator-controller-manager-8688dbc7bc-tt8g8\" (UID: \"4f2132a4-bb41-4ba9-9e07-71a4978001c4\") " pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.592318 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f2132a4-bb41-4ba9-9e07-71a4978001c4-webhook-cert\") pod \"metallb-operator-controller-manager-8688dbc7bc-tt8g8\" (UID: \"4f2132a4-bb41-4ba9-9e07-71a4978001c4\") " pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.592762 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f2132a4-bb41-4ba9-9e07-71a4978001c4-apiservice-cert\") pod \"metallb-operator-controller-manager-8688dbc7bc-tt8g8\" (UID: \"4f2132a4-bb41-4ba9-9e07-71a4978001c4\") " pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.602690 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqzzj\" (UniqueName: \"kubernetes.io/projected/4f2132a4-bb41-4ba9-9e07-71a4978001c4-kube-api-access-hqzzj\") pod \"metallb-operator-controller-manager-8688dbc7bc-tt8g8\" (UID: \"4f2132a4-bb41-4ba9-9e07-71a4978001c4\") " pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.664176 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.664242 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 
19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.720800 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.757144 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7"] Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.758108 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.766355 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.766929 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-j7xjt" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.767119 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.775652 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7"] Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.779704 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhk5h\" (UniqueName: \"kubernetes.io/projected/a4cf3401-ae1c-41d6-9b48-abb500c6f46e-kube-api-access-qhk5h\") pod \"metallb-operator-webhook-server-6b85788bf7-f7bx7\" (UID: \"a4cf3401-ae1c-41d6-9b48-abb500c6f46e\") " pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.779784 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a4cf3401-ae1c-41d6-9b48-abb500c6f46e-webhook-cert\") pod \"metallb-operator-webhook-server-6b85788bf7-f7bx7\" (UID: \"a4cf3401-ae1c-41d6-9b48-abb500c6f46e\") " pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.779822 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4cf3401-ae1c-41d6-9b48-abb500c6f46e-apiservice-cert\") pod \"metallb-operator-webhook-server-6b85788bf7-f7bx7\" (UID: \"a4cf3401-ae1c-41d6-9b48-abb500c6f46e\") " pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.881691 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4cf3401-ae1c-41d6-9b48-abb500c6f46e-apiservice-cert\") pod \"metallb-operator-webhook-server-6b85788bf7-f7bx7\" (UID: \"a4cf3401-ae1c-41d6-9b48-abb500c6f46e\") " pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.883544 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhk5h\" (UniqueName: \"kubernetes.io/projected/a4cf3401-ae1c-41d6-9b48-abb500c6f46e-kube-api-access-qhk5h\") pod \"metallb-operator-webhook-server-6b85788bf7-f7bx7\" (UID: \"a4cf3401-ae1c-41d6-9b48-abb500c6f46e\") " pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.883946 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4cf3401-ae1c-41d6-9b48-abb500c6f46e-webhook-cert\") pod \"metallb-operator-webhook-server-6b85788bf7-f7bx7\" (UID: \"a4cf3401-ae1c-41d6-9b48-abb500c6f46e\") " 
pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.896256 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4cf3401-ae1c-41d6-9b48-abb500c6f46e-apiservice-cert\") pod \"metallb-operator-webhook-server-6b85788bf7-f7bx7\" (UID: \"a4cf3401-ae1c-41d6-9b48-abb500c6f46e\") " pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.907561 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4cf3401-ae1c-41d6-9b48-abb500c6f46e-webhook-cert\") pod \"metallb-operator-webhook-server-6b85788bf7-f7bx7\" (UID: \"a4cf3401-ae1c-41d6-9b48-abb500c6f46e\") " pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:15 crc kubenswrapper[5007]: I0218 19:51:15.917053 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhk5h\" (UniqueName: \"kubernetes.io/projected/a4cf3401-ae1c-41d6-9b48-abb500c6f46e-kube-api-access-qhk5h\") pod \"metallb-operator-webhook-server-6b85788bf7-f7bx7\" (UID: \"a4cf3401-ae1c-41d6-9b48-abb500c6f46e\") " pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:16 crc kubenswrapper[5007]: I0218 19:51:16.043775 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8"] Feb 18 19:51:16 crc kubenswrapper[5007]: I0218 19:51:16.122036 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:16 crc kubenswrapper[5007]: I0218 19:51:16.722817 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7"] Feb 18 19:51:16 crc kubenswrapper[5007]: W0218 19:51:16.729723 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4cf3401_ae1c_41d6_9b48_abb500c6f46e.slice/crio-fc988a6a25f49f7d5f0cef5e2dd2753832a66ff3d1982f53ab71b7aa2d164b76 WatchSource:0}: Error finding container fc988a6a25f49f7d5f0cef5e2dd2753832a66ff3d1982f53ab71b7aa2d164b76: Status 404 returned error can't find the container with id fc988a6a25f49f7d5f0cef5e2dd2753832a66ff3d1982f53ab71b7aa2d164b76 Feb 18 19:51:16 crc kubenswrapper[5007]: I0218 19:51:16.795701 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" event={"ID":"a4cf3401-ae1c-41d6-9b48-abb500c6f46e","Type":"ContainerStarted","Data":"fc988a6a25f49f7d5f0cef5e2dd2753832a66ff3d1982f53ab71b7aa2d164b76"} Feb 18 19:51:16 crc kubenswrapper[5007]: I0218 19:51:16.796792 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" event={"ID":"4f2132a4-bb41-4ba9-9e07-71a4978001c4","Type":"ContainerStarted","Data":"b20bfb989f2848578e729493bd18a579d8674d3b4273840a6574eb4913db8e86"} Feb 18 19:51:19 crc kubenswrapper[5007]: I0218 19:51:19.819218 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" event={"ID":"4f2132a4-bb41-4ba9-9e07-71a4978001c4","Type":"ContainerStarted","Data":"44e4101061d249c76e6910ca8d33acc39ff2ffcc65203470868ac48e0dd2f5b9"} Feb 18 19:51:19 crc kubenswrapper[5007]: I0218 19:51:19.819669 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:19 crc kubenswrapper[5007]: I0218 19:51:19.847528 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" podStartSLOduration=1.828230652 podStartE2EDuration="4.847510518s" podCreationTimestamp="2026-02-18 19:51:15 +0000 UTC" firstStartedPulling="2026-02-18 19:51:16.072988933 +0000 UTC m=+760.855108457" lastFinishedPulling="2026-02-18 19:51:19.092268799 +0000 UTC m=+763.874388323" observedRunningTime="2026-02-18 19:51:19.837028248 +0000 UTC m=+764.619147792" watchObservedRunningTime="2026-02-18 19:51:19.847510518 +0000 UTC m=+764.629630042" Feb 18 19:51:21 crc kubenswrapper[5007]: I0218 19:51:21.844119 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" event={"ID":"a4cf3401-ae1c-41d6-9b48-abb500c6f46e","Type":"ContainerStarted","Data":"30d3a0e2b074c11df5cd84a189634a2fccde70953f675a867afd51fae29b44c9"} Feb 18 19:51:21 crc kubenswrapper[5007]: I0218 19:51:21.844534 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:21 crc kubenswrapper[5007]: I0218 19:51:21.873111 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" podStartSLOduration=2.668883122 podStartE2EDuration="6.873084699s" podCreationTimestamp="2026-02-18 19:51:15 +0000 UTC" firstStartedPulling="2026-02-18 19:51:16.731956179 +0000 UTC m=+761.514075714" lastFinishedPulling="2026-02-18 19:51:20.936157767 +0000 UTC m=+765.718277291" observedRunningTime="2026-02-18 19:51:21.866459589 +0000 UTC m=+766.648579163" watchObservedRunningTime="2026-02-18 19:51:21.873084699 +0000 UTC m=+766.655204233" Feb 18 19:51:36 crc kubenswrapper[5007]: I0218 19:51:36.127174 5007 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6b85788bf7-f7bx7" Feb 18 19:51:45 crc kubenswrapper[5007]: I0218 19:51:45.664052 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:51:45 crc kubenswrapper[5007]: I0218 19:51:45.664386 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:51:45 crc kubenswrapper[5007]: I0218 19:51:45.664480 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:51:45 crc kubenswrapper[5007]: I0218 19:51:45.665104 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10c20995e1da7316ed8d3f74629b0d4549ee655af8465574d8a6d0f66d48031f"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:51:45 crc kubenswrapper[5007]: I0218 19:51:45.665164 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://10c20995e1da7316ed8d3f74629b0d4549ee655af8465574d8a6d0f66d48031f" gracePeriod=600 Feb 18 19:51:46 crc kubenswrapper[5007]: I0218 19:51:46.005334 5007 generic.go:334] "Generic 
(PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="10c20995e1da7316ed8d3f74629b0d4549ee655af8465574d8a6d0f66d48031f" exitCode=0 Feb 18 19:51:46 crc kubenswrapper[5007]: I0218 19:51:46.005530 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"10c20995e1da7316ed8d3f74629b0d4549ee655af8465574d8a6d0f66d48031f"} Feb 18 19:51:46 crc kubenswrapper[5007]: I0218 19:51:46.005726 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"0761fcff0a059a787d7d4e6ecd66cfd18d646e97ccddeb6ea48208f9ee32bdee"} Feb 18 19:51:46 crc kubenswrapper[5007]: I0218 19:51:46.005756 5007 scope.go:117] "RemoveContainer" containerID="a165cc76c2e5187ea1399af2eea9deb083731acfe4acb3767cb0941dfb479641" Feb 18 19:51:55 crc kubenswrapper[5007]: I0218 19:51:55.725312 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8688dbc7bc-tt8g8" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.440749 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-k8wx9"] Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.446324 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.448188 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.448783 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-k9nhd" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.449067 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.467874 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4"] Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.468626 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.470420 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.487172 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4"] Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.545389 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2xg6k"] Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.546364 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2xg6k" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.550913 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.550991 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-kpjzn" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.551009 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.553616 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.564555 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7035a97f-3800-4ead-8de5-367436d3f643-metrics\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.564598 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/876c34e5-89f0-4da9-8d6d-b77cdfcfebd5-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-bgqd4\" (UID: \"876c34e5-89f0-4da9-8d6d-b77cdfcfebd5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.564625 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vpsw\" (UniqueName: \"kubernetes.io/projected/876c34e5-89f0-4da9-8d6d-b77cdfcfebd5-kube-api-access-2vpsw\") pod \"frr-k8s-webhook-server-78b44bf5bb-bgqd4\" (UID: \"876c34e5-89f0-4da9-8d6d-b77cdfcfebd5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" Feb 18 19:51:56 crc 
kubenswrapper[5007]: I0218 19:51:56.564692 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7035a97f-3800-4ead-8de5-367436d3f643-frr-sockets\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.564756 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-bb9b7"] Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.564812 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7035a97f-3800-4ead-8de5-367436d3f643-metrics-certs\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.564885 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7035a97f-3800-4ead-8de5-367436d3f643-frr-conf\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.564929 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7035a97f-3800-4ead-8de5-367436d3f643-frr-startup\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.564996 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7035a97f-3800-4ead-8de5-367436d3f643-reloader\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " 
pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.565071 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvdpw\" (UniqueName: \"kubernetes.io/projected/7035a97f-3800-4ead-8de5-367436d3f643-kube-api-access-pvdpw\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.565623 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.567335 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.584708 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-bb9b7"] Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666263 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvdpw\" (UniqueName: \"kubernetes.io/projected/7035a97f-3800-4ead-8de5-367436d3f643-kube-api-access-pvdpw\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666300 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7035a97f-3800-4ead-8de5-367436d3f643-metrics\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666319 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/876c34e5-89f0-4da9-8d6d-b77cdfcfebd5-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-bgqd4\" (UID: 
\"876c34e5-89f0-4da9-8d6d-b77cdfcfebd5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666343 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/500ed4c7-8e38-4e80-acf9-52677ec22ad9-metrics-certs\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666365 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phwst\" (UniqueName: \"kubernetes.io/projected/500ed4c7-8e38-4e80-acf9-52677ec22ad9-kube-api-access-phwst\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666385 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/500ed4c7-8e38-4e80-acf9-52677ec22ad9-metallb-excludel2\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666402 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vpsw\" (UniqueName: \"kubernetes.io/projected/876c34e5-89f0-4da9-8d6d-b77cdfcfebd5-kube-api-access-2vpsw\") pod \"frr-k8s-webhook-server-78b44bf5bb-bgqd4\" (UID: \"876c34e5-89f0-4da9-8d6d-b77cdfcfebd5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666449 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7035a97f-3800-4ead-8de5-367436d3f643-frr-sockets\") pod \"frr-k8s-k8wx9\" (UID: 
\"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666471 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7035a97f-3800-4ead-8de5-367436d3f643-metrics-certs\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666489 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38b9df5f-7f6d-46ea-aa48-002f7c9af9df-metrics-certs\") pod \"controller-69bbfbf88f-bb9b7\" (UID: \"38b9df5f-7f6d-46ea-aa48-002f7c9af9df\") " pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:51:56 crc kubenswrapper[5007]: E0218 19:51:56.666553 5007 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 18 19:51:56 crc kubenswrapper[5007]: E0218 19:51:56.666629 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/876c34e5-89f0-4da9-8d6d-b77cdfcfebd5-cert podName:876c34e5-89f0-4da9-8d6d-b77cdfcfebd5 nodeName:}" failed. No retries permitted until 2026-02-18 19:51:57.166608325 +0000 UTC m=+801.948727929 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/876c34e5-89f0-4da9-8d6d-b77cdfcfebd5-cert") pod "frr-k8s-webhook-server-78b44bf5bb-bgqd4" (UID: "876c34e5-89f0-4da9-8d6d-b77cdfcfebd5") : secret "frr-k8s-webhook-server-cert" not found Feb 18 19:51:56 crc kubenswrapper[5007]: E0218 19:51:56.666665 5007 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 18 19:51:56 crc kubenswrapper[5007]: E0218 19:51:56.666705 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7035a97f-3800-4ead-8de5-367436d3f643-metrics-certs podName:7035a97f-3800-4ead-8de5-367436d3f643 nodeName:}" failed. No retries permitted until 2026-02-18 19:51:57.166691018 +0000 UTC m=+801.948810552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7035a97f-3800-4ead-8de5-367436d3f643-metrics-certs") pod "frr-k8s-k8wx9" (UID: "7035a97f-3800-4ead-8de5-367436d3f643") : secret "frr-k8s-certs-secret" not found Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666572 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7035a97f-3800-4ead-8de5-367436d3f643-frr-conf\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666761 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/500ed4c7-8e38-4e80-acf9-52677ec22ad9-memberlist\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666838 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/7035a97f-3800-4ead-8de5-367436d3f643-frr-startup\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666924 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7035a97f-3800-4ead-8de5-367436d3f643-reloader\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666948 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqp99\" (UniqueName: \"kubernetes.io/projected/38b9df5f-7f6d-46ea-aa48-002f7c9af9df-kube-api-access-vqp99\") pod \"controller-69bbfbf88f-bb9b7\" (UID: \"38b9df5f-7f6d-46ea-aa48-002f7c9af9df\") " pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666964 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7035a97f-3800-4ead-8de5-367436d3f643-frr-sockets\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666991 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38b9df5f-7f6d-46ea-aa48-002f7c9af9df-cert\") pod \"controller-69bbfbf88f-bb9b7\" (UID: \"38b9df5f-7f6d-46ea-aa48-002f7c9af9df\") " pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.666960 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7035a97f-3800-4ead-8de5-367436d3f643-metrics\") pod \"frr-k8s-k8wx9\" (UID: 
\"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.667035 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7035a97f-3800-4ead-8de5-367436d3f643-frr-conf\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.667216 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7035a97f-3800-4ead-8de5-367436d3f643-reloader\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.668030 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7035a97f-3800-4ead-8de5-367436d3f643-frr-startup\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.688293 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vpsw\" (UniqueName: \"kubernetes.io/projected/876c34e5-89f0-4da9-8d6d-b77cdfcfebd5-kube-api-access-2vpsw\") pod \"frr-k8s-webhook-server-78b44bf5bb-bgqd4\" (UID: \"876c34e5-89f0-4da9-8d6d-b77cdfcfebd5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.690859 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvdpw\" (UniqueName: \"kubernetes.io/projected/7035a97f-3800-4ead-8de5-367436d3f643-kube-api-access-pvdpw\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.768381 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38b9df5f-7f6d-46ea-aa48-002f7c9af9df-cert\") pod \"controller-69bbfbf88f-bb9b7\" (UID: \"38b9df5f-7f6d-46ea-aa48-002f7c9af9df\") " pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.769723 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/500ed4c7-8e38-4e80-acf9-52677ec22ad9-metrics-certs\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.769818 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phwst\" (UniqueName: \"kubernetes.io/projected/500ed4c7-8e38-4e80-acf9-52677ec22ad9-kube-api-access-phwst\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.769916 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/500ed4c7-8e38-4e80-acf9-52677ec22ad9-metallb-excludel2\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.770117 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38b9df5f-7f6d-46ea-aa48-002f7c9af9df-metrics-certs\") pod \"controller-69bbfbf88f-bb9b7\" (UID: \"38b9df5f-7f6d-46ea-aa48-002f7c9af9df\") " pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.770209 5007 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 19:51:56 crc 
kubenswrapper[5007]: I0218 19:51:56.770326 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/500ed4c7-8e38-4e80-acf9-52677ec22ad9-memberlist\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.770467 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqp99\" (UniqueName: \"kubernetes.io/projected/38b9df5f-7f6d-46ea-aa48-002f7c9af9df-kube-api-access-vqp99\") pod \"controller-69bbfbf88f-bb9b7\" (UID: \"38b9df5f-7f6d-46ea-aa48-002f7c9af9df\") " pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:51:56 crc kubenswrapper[5007]: E0218 19:51:56.770532 5007 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 19:51:56 crc kubenswrapper[5007]: E0218 19:51:56.770680 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/500ed4c7-8e38-4e80-acf9-52677ec22ad9-memberlist podName:500ed4c7-8e38-4e80-acf9-52677ec22ad9 nodeName:}" failed. No retries permitted until 2026-02-18 19:51:57.27066313 +0000 UTC m=+802.052782654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/500ed4c7-8e38-4e80-acf9-52677ec22ad9-memberlist") pod "speaker-2xg6k" (UID: "500ed4c7-8e38-4e80-acf9-52677ec22ad9") : secret "metallb-memberlist" not found Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.771251 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/500ed4c7-8e38-4e80-acf9-52677ec22ad9-metallb-excludel2\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.774959 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/500ed4c7-8e38-4e80-acf9-52677ec22ad9-metrics-certs\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.778205 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38b9df5f-7f6d-46ea-aa48-002f7c9af9df-metrics-certs\") pod \"controller-69bbfbf88f-bb9b7\" (UID: \"38b9df5f-7f6d-46ea-aa48-002f7c9af9df\") " pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.782400 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38b9df5f-7f6d-46ea-aa48-002f7c9af9df-cert\") pod \"controller-69bbfbf88f-bb9b7\" (UID: \"38b9df5f-7f6d-46ea-aa48-002f7c9af9df\") " pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.789319 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phwst\" (UniqueName: \"kubernetes.io/projected/500ed4c7-8e38-4e80-acf9-52677ec22ad9-kube-api-access-phwst\") pod \"speaker-2xg6k\" (UID: 
\"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.794337 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqp99\" (UniqueName: \"kubernetes.io/projected/38b9df5f-7f6d-46ea-aa48-002f7c9af9df-kube-api-access-vqp99\") pod \"controller-69bbfbf88f-bb9b7\" (UID: \"38b9df5f-7f6d-46ea-aa48-002f7c9af9df\") " pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:51:56 crc kubenswrapper[5007]: I0218 19:51:56.879216 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:51:57 crc kubenswrapper[5007]: I0218 19:51:57.181914 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/876c34e5-89f0-4da9-8d6d-b77cdfcfebd5-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-bgqd4\" (UID: \"876c34e5-89f0-4da9-8d6d-b77cdfcfebd5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" Feb 18 19:51:57 crc kubenswrapper[5007]: I0218 19:51:57.181981 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7035a97f-3800-4ead-8de5-367436d3f643-metrics-certs\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:57 crc kubenswrapper[5007]: I0218 19:51:57.185476 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7035a97f-3800-4ead-8de5-367436d3f643-metrics-certs\") pod \"frr-k8s-k8wx9\" (UID: \"7035a97f-3800-4ead-8de5-367436d3f643\") " pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:57 crc kubenswrapper[5007]: I0218 19:51:57.185622 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/876c34e5-89f0-4da9-8d6d-b77cdfcfebd5-cert\") pod 
\"frr-k8s-webhook-server-78b44bf5bb-bgqd4\" (UID: \"876c34e5-89f0-4da9-8d6d-b77cdfcfebd5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" Feb 18 19:51:57 crc kubenswrapper[5007]: I0218 19:51:57.283275 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/500ed4c7-8e38-4e80-acf9-52677ec22ad9-memberlist\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:57 crc kubenswrapper[5007]: E0218 19:51:57.283537 5007 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 19:51:57 crc kubenswrapper[5007]: E0218 19:51:57.283603 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/500ed4c7-8e38-4e80-acf9-52677ec22ad9-memberlist podName:500ed4c7-8e38-4e80-acf9-52677ec22ad9 nodeName:}" failed. No retries permitted until 2026-02-18 19:51:58.28358399 +0000 UTC m=+803.065703534 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/500ed4c7-8e38-4e80-acf9-52677ec22ad9-memberlist") pod "speaker-2xg6k" (UID: "500ed4c7-8e38-4e80-acf9-52677ec22ad9") : secret "metallb-memberlist" not found Feb 18 19:51:57 crc kubenswrapper[5007]: I0218 19:51:57.364119 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:51:57 crc kubenswrapper[5007]: I0218 19:51:57.382998 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" Feb 18 19:51:57 crc kubenswrapper[5007]: I0218 19:51:57.395283 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-bb9b7"] Feb 18 19:51:57 crc kubenswrapper[5007]: I0218 19:51:57.898250 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4"] Feb 18 19:51:57 crc kubenswrapper[5007]: W0218 19:51:57.899753 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod876c34e5_89f0_4da9_8d6d_b77cdfcfebd5.slice/crio-d0c139541226fc4b997a652edf5721ca0f7a6f9b02ea222835a0ef6ef0bf310c WatchSource:0}: Error finding container d0c139541226fc4b997a652edf5721ca0f7a6f9b02ea222835a0ef6ef0bf310c: Status 404 returned error can't find the container with id d0c139541226fc4b997a652edf5721ca0f7a6f9b02ea222835a0ef6ef0bf310c Feb 18 19:51:58 crc kubenswrapper[5007]: I0218 19:51:58.123690 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-bb9b7" event={"ID":"38b9df5f-7f6d-46ea-aa48-002f7c9af9df","Type":"ContainerStarted","Data":"5a595176bfa45d906150a0252602fae4dba9d18c580b4b98b4e4175b6f415c5e"} Feb 18 19:51:58 crc kubenswrapper[5007]: I0218 19:51:58.123747 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-bb9b7" event={"ID":"38b9df5f-7f6d-46ea-aa48-002f7c9af9df","Type":"ContainerStarted","Data":"98e721da82bec034e87db8501ee6e6ae6db5313e5c49ff22583d724cd450bb73"} Feb 18 19:51:58 crc kubenswrapper[5007]: I0218 19:51:58.123760 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-bb9b7" event={"ID":"38b9df5f-7f6d-46ea-aa48-002f7c9af9df","Type":"ContainerStarted","Data":"6757971f182b41fb1bdaed401047500c85329cd86f2e16f2691a6e4336d5ca6f"} Feb 18 19:51:58 crc kubenswrapper[5007]: I0218 19:51:58.124678 5007 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:51:58 crc kubenswrapper[5007]: I0218 19:51:58.125970 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8wx9" event={"ID":"7035a97f-3800-4ead-8de5-367436d3f643","Type":"ContainerStarted","Data":"95b3c641feefd3b3c44d18d7c0208b64269c1dcd4aa5065b88adfb3e19588ee4"} Feb 18 19:51:58 crc kubenswrapper[5007]: I0218 19:51:58.127518 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" event={"ID":"876c34e5-89f0-4da9-8d6d-b77cdfcfebd5","Type":"ContainerStarted","Data":"d0c139541226fc4b997a652edf5721ca0f7a6f9b02ea222835a0ef6ef0bf310c"} Feb 18 19:51:58 crc kubenswrapper[5007]: I0218 19:51:58.151246 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-bb9b7" podStartSLOduration=2.151217462 podStartE2EDuration="2.151217462s" podCreationTimestamp="2026-02-18 19:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:51:58.147006132 +0000 UTC m=+802.929125726" watchObservedRunningTime="2026-02-18 19:51:58.151217462 +0000 UTC m=+802.933337026" Feb 18 19:51:58 crc kubenswrapper[5007]: I0218 19:51:58.297805 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/500ed4c7-8e38-4e80-acf9-52677ec22ad9-memberlist\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " pod="metallb-system/speaker-2xg6k" Feb 18 19:51:58 crc kubenswrapper[5007]: I0218 19:51:58.303945 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/500ed4c7-8e38-4e80-acf9-52677ec22ad9-memberlist\") pod \"speaker-2xg6k\" (UID: \"500ed4c7-8e38-4e80-acf9-52677ec22ad9\") " 
pod="metallb-system/speaker-2xg6k" Feb 18 19:51:58 crc kubenswrapper[5007]: I0218 19:51:58.360076 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2xg6k" Feb 18 19:51:59 crc kubenswrapper[5007]: I0218 19:51:59.172896 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2xg6k" event={"ID":"500ed4c7-8e38-4e80-acf9-52677ec22ad9","Type":"ContainerStarted","Data":"080d59d0f37d8a184c571ba2b70b5f9598438a4c909f1d5481865db68c3755f1"} Feb 18 19:51:59 crc kubenswrapper[5007]: I0218 19:51:59.173286 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2xg6k" event={"ID":"500ed4c7-8e38-4e80-acf9-52677ec22ad9","Type":"ContainerStarted","Data":"fd5b391ea0a57fb6175440c31a11b11ac2e4d1f2b53f03d2520e2a2dce093f6f"} Feb 18 19:51:59 crc kubenswrapper[5007]: I0218 19:51:59.173298 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2xg6k" event={"ID":"500ed4c7-8e38-4e80-acf9-52677ec22ad9","Type":"ContainerStarted","Data":"6a67927129e0b77b7b03f3dd8906348c4f533baaff450c1ad31b5ed8c0f623b9"} Feb 18 19:51:59 crc kubenswrapper[5007]: I0218 19:51:59.173877 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2xg6k" Feb 18 19:51:59 crc kubenswrapper[5007]: I0218 19:51:59.215326 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2xg6k" podStartSLOduration=3.21530592 podStartE2EDuration="3.21530592s" podCreationTimestamp="2026-02-18 19:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:51:59.20832483 +0000 UTC m=+803.990444354" watchObservedRunningTime="2026-02-18 19:51:59.21530592 +0000 UTC m=+803.997425444" Feb 18 19:52:05 crc kubenswrapper[5007]: I0218 19:52:05.210968 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" event={"ID":"876c34e5-89f0-4da9-8d6d-b77cdfcfebd5","Type":"ContainerStarted","Data":"8d2c9bce626faeda632c77bb9479df8003dc2961cd45144685c77a92c7966028"} Feb 18 19:52:05 crc kubenswrapper[5007]: I0218 19:52:05.211639 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" Feb 18 19:52:05 crc kubenswrapper[5007]: I0218 19:52:05.214468 5007 generic.go:334] "Generic (PLEG): container finished" podID="7035a97f-3800-4ead-8de5-367436d3f643" containerID="4a57b5cfb0d3be3238941696d07400de86ab8a5284f2e819fbc11fb851c7b547" exitCode=0 Feb 18 19:52:05 crc kubenswrapper[5007]: I0218 19:52:05.214525 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8wx9" event={"ID":"7035a97f-3800-4ead-8de5-367436d3f643","Type":"ContainerDied","Data":"4a57b5cfb0d3be3238941696d07400de86ab8a5284f2e819fbc11fb851c7b547"} Feb 18 19:52:05 crc kubenswrapper[5007]: I0218 19:52:05.238169 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" podStartSLOduration=2.183189027 podStartE2EDuration="9.238140143s" podCreationTimestamp="2026-02-18 19:51:56 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.901806363 +0000 UTC m=+802.683925887" lastFinishedPulling="2026-02-18 19:52:04.956757439 +0000 UTC m=+809.738877003" observedRunningTime="2026-02-18 19:52:05.229884537 +0000 UTC m=+810.012004131" watchObservedRunningTime="2026-02-18 19:52:05.238140143 +0000 UTC m=+810.020259707" Feb 18 19:52:06 crc kubenswrapper[5007]: I0218 19:52:06.229503 5007 generic.go:334] "Generic (PLEG): container finished" podID="7035a97f-3800-4ead-8de5-367436d3f643" containerID="35a86703f07640446afc612618e460611456d2264febd7f59c0ee5fa1ef3041f" exitCode=0 Feb 18 19:52:06 crc kubenswrapper[5007]: I0218 19:52:06.229642 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-k8wx9" event={"ID":"7035a97f-3800-4ead-8de5-367436d3f643","Type":"ContainerDied","Data":"35a86703f07640446afc612618e460611456d2264febd7f59c0ee5fa1ef3041f"} Feb 18 19:52:07 crc kubenswrapper[5007]: I0218 19:52:07.240027 5007 generic.go:334] "Generic (PLEG): container finished" podID="7035a97f-3800-4ead-8de5-367436d3f643" containerID="77b072b1586802253de88b7e57ca59b86defad3af578cc200256fdc15ad0554c" exitCode=0 Feb 18 19:52:07 crc kubenswrapper[5007]: I0218 19:52:07.240097 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8wx9" event={"ID":"7035a97f-3800-4ead-8de5-367436d3f643","Type":"ContainerDied","Data":"77b072b1586802253de88b7e57ca59b86defad3af578cc200256fdc15ad0554c"} Feb 18 19:52:08 crc kubenswrapper[5007]: I0218 19:52:08.252031 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8wx9" event={"ID":"7035a97f-3800-4ead-8de5-367436d3f643","Type":"ContainerStarted","Data":"c59a63d92acdd175403452b155d704f38e43e27a7ba24ec8e2e3887711664c1d"} Feb 18 19:52:08 crc kubenswrapper[5007]: I0218 19:52:08.368262 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2xg6k" Feb 18 19:52:09 crc kubenswrapper[5007]: I0218 19:52:09.262318 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8wx9" event={"ID":"7035a97f-3800-4ead-8de5-367436d3f643","Type":"ContainerStarted","Data":"b293b71a18449bbf97d951fc523cc129dd60a4702a3d466fd94446bdcd1bd99e"} Feb 18 19:52:09 crc kubenswrapper[5007]: I0218 19:52:09.262359 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8wx9" event={"ID":"7035a97f-3800-4ead-8de5-367436d3f643","Type":"ContainerStarted","Data":"a2ebeb706860fa4fd15f78cf752a3c8cad2664a10c7b2215a26a6317f1ab74d5"} Feb 18 19:52:09 crc kubenswrapper[5007]: I0218 19:52:09.262370 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8wx9" 
event={"ID":"7035a97f-3800-4ead-8de5-367436d3f643","Type":"ContainerStarted","Data":"033b3ff6c3f2a8b0392693155d01dc8a1fc6d279e69f90a37d9f4e0a863e124f"} Feb 18 19:52:09 crc kubenswrapper[5007]: I0218 19:52:09.262379 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8wx9" event={"ID":"7035a97f-3800-4ead-8de5-367436d3f643","Type":"ContainerStarted","Data":"5c8a1412a01f57a4eb9652975339e5750ebe6cd841258a82c99c3d933aa53cd0"} Feb 18 19:52:09 crc kubenswrapper[5007]: I0218 19:52:09.262387 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8wx9" event={"ID":"7035a97f-3800-4ead-8de5-367436d3f643","Type":"ContainerStarted","Data":"6cbcca3718bf2ea239c0d5a49bfaeca272a3090743c0b9a4b9ffe8d1773445ce"} Feb 18 19:52:09 crc kubenswrapper[5007]: I0218 19:52:09.291455 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-k8wx9" podStartSLOduration=5.943735632 podStartE2EDuration="13.291420806s" podCreationTimestamp="2026-02-18 19:51:56 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.58790507 +0000 UTC m=+802.370024594" lastFinishedPulling="2026-02-18 19:52:04.935590204 +0000 UTC m=+809.717709768" observedRunningTime="2026-02-18 19:52:09.284172149 +0000 UTC m=+814.066291683" watchObservedRunningTime="2026-02-18 19:52:09.291420806 +0000 UTC m=+814.073540330" Feb 18 19:52:10 crc kubenswrapper[5007]: I0218 19:52:10.275082 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:52:12 crc kubenswrapper[5007]: I0218 19:52:12.365286 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:52:12 crc kubenswrapper[5007]: I0218 19:52:12.422203 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:52:14 crc kubenswrapper[5007]: I0218 19:52:14.366070 5007 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-index-lsnpv"] Feb 18 19:52:14 crc kubenswrapper[5007]: I0218 19:52:14.367417 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lsnpv" Feb 18 19:52:14 crc kubenswrapper[5007]: I0218 19:52:14.371274 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hrmxj" Feb 18 19:52:14 crc kubenswrapper[5007]: I0218 19:52:14.372118 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 19:52:14 crc kubenswrapper[5007]: I0218 19:52:14.376399 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 19:52:14 crc kubenswrapper[5007]: I0218 19:52:14.391819 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lsnpv"] Feb 18 19:52:14 crc kubenswrapper[5007]: I0218 19:52:14.432563 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6s9t\" (UniqueName: \"kubernetes.io/projected/228c0048-b5ba-474f-ae70-78b39ed488b3-kube-api-access-w6s9t\") pod \"openstack-operator-index-lsnpv\" (UID: \"228c0048-b5ba-474f-ae70-78b39ed488b3\") " pod="openstack-operators/openstack-operator-index-lsnpv" Feb 18 19:52:14 crc kubenswrapper[5007]: I0218 19:52:14.533726 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6s9t\" (UniqueName: \"kubernetes.io/projected/228c0048-b5ba-474f-ae70-78b39ed488b3-kube-api-access-w6s9t\") pod \"openstack-operator-index-lsnpv\" (UID: \"228c0048-b5ba-474f-ae70-78b39ed488b3\") " pod="openstack-operators/openstack-operator-index-lsnpv" Feb 18 19:52:14 crc kubenswrapper[5007]: I0218 19:52:14.570350 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6s9t\" 
(UniqueName: \"kubernetes.io/projected/228c0048-b5ba-474f-ae70-78b39ed488b3-kube-api-access-w6s9t\") pod \"openstack-operator-index-lsnpv\" (UID: \"228c0048-b5ba-474f-ae70-78b39ed488b3\") " pod="openstack-operators/openstack-operator-index-lsnpv" Feb 18 19:52:14 crc kubenswrapper[5007]: I0218 19:52:14.706928 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lsnpv" Feb 18 19:52:15 crc kubenswrapper[5007]: I0218 19:52:15.213459 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lsnpv"] Feb 18 19:52:15 crc kubenswrapper[5007]: W0218 19:52:15.216366 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod228c0048_b5ba_474f_ae70_78b39ed488b3.slice/crio-d3d2aa04ae6d510f6afe59b1819b5f950fc455f767ce515c0914c7da3e7dd5e2 WatchSource:0}: Error finding container d3d2aa04ae6d510f6afe59b1819b5f950fc455f767ce515c0914c7da3e7dd5e2: Status 404 returned error can't find the container with id d3d2aa04ae6d510f6afe59b1819b5f950fc455f767ce515c0914c7da3e7dd5e2 Feb 18 19:52:15 crc kubenswrapper[5007]: I0218 19:52:15.312663 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lsnpv" event={"ID":"228c0048-b5ba-474f-ae70-78b39ed488b3","Type":"ContainerStarted","Data":"d3d2aa04ae6d510f6afe59b1819b5f950fc455f767ce515c0914c7da3e7dd5e2"} Feb 18 19:52:16 crc kubenswrapper[5007]: I0218 19:52:16.888794 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-bb9b7" Feb 18 19:52:17 crc kubenswrapper[5007]: I0218 19:52:17.367452 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-k8wx9" Feb 18 19:52:17 crc kubenswrapper[5007]: I0218 19:52:17.395118 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-bgqd4" Feb 18 19:52:18 crc kubenswrapper[5007]: I0218 19:52:18.153939 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-lsnpv"] Feb 18 19:52:18 crc kubenswrapper[5007]: I0218 19:52:18.339457 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lsnpv" event={"ID":"228c0048-b5ba-474f-ae70-78b39ed488b3","Type":"ContainerStarted","Data":"a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009"} Feb 18 19:52:18 crc kubenswrapper[5007]: I0218 19:52:18.558380 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-lsnpv" podStartSLOduration=2.529880337 podStartE2EDuration="4.558340222s" podCreationTimestamp="2026-02-18 19:52:14 +0000 UTC" firstStartedPulling="2026-02-18 19:52:15.219193291 +0000 UTC m=+820.001312815" lastFinishedPulling="2026-02-18 19:52:17.247653176 +0000 UTC m=+822.029772700" observedRunningTime="2026-02-18 19:52:18.357174852 +0000 UTC m=+823.139294436" watchObservedRunningTime="2026-02-18 19:52:18.558340222 +0000 UTC m=+823.340459786" Feb 18 19:52:18 crc kubenswrapper[5007]: I0218 19:52:18.568679 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rm6s2"] Feb 18 19:52:18 crc kubenswrapper[5007]: I0218 19:52:18.570320 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rm6s2" Feb 18 19:52:18 crc kubenswrapper[5007]: I0218 19:52:18.581873 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rm6s2"] Feb 18 19:52:18 crc kubenswrapper[5007]: I0218 19:52:18.603563 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhhcp\" (UniqueName: \"kubernetes.io/projected/87050be6-ea0e-4d4d-b0bb-429ec34ea139-kube-api-access-mhhcp\") pod \"openstack-operator-index-rm6s2\" (UID: \"87050be6-ea0e-4d4d-b0bb-429ec34ea139\") " pod="openstack-operators/openstack-operator-index-rm6s2" Feb 18 19:52:18 crc kubenswrapper[5007]: I0218 19:52:18.705047 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhhcp\" (UniqueName: \"kubernetes.io/projected/87050be6-ea0e-4d4d-b0bb-429ec34ea139-kube-api-access-mhhcp\") pod \"openstack-operator-index-rm6s2\" (UID: \"87050be6-ea0e-4d4d-b0bb-429ec34ea139\") " pod="openstack-operators/openstack-operator-index-rm6s2" Feb 18 19:52:18 crc kubenswrapper[5007]: I0218 19:52:18.727716 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhhcp\" (UniqueName: \"kubernetes.io/projected/87050be6-ea0e-4d4d-b0bb-429ec34ea139-kube-api-access-mhhcp\") pod \"openstack-operator-index-rm6s2\" (UID: \"87050be6-ea0e-4d4d-b0bb-429ec34ea139\") " pod="openstack-operators/openstack-operator-index-rm6s2" Feb 18 19:52:18 crc kubenswrapper[5007]: I0218 19:52:18.941109 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rm6s2" Feb 18 19:52:19 crc kubenswrapper[5007]: I0218 19:52:19.192734 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rm6s2"] Feb 18 19:52:19 crc kubenswrapper[5007]: I0218 19:52:19.352381 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rm6s2" event={"ID":"87050be6-ea0e-4d4d-b0bb-429ec34ea139","Type":"ContainerStarted","Data":"6d19b8444241cd6459636cd2a1aeccde411284e5c98f7b2b22eb7d29451d538b"} Feb 18 19:52:19 crc kubenswrapper[5007]: I0218 19:52:19.352521 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-lsnpv" podUID="228c0048-b5ba-474f-ae70-78b39ed488b3" containerName="registry-server" containerID="cri-o://a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009" gracePeriod=2 Feb 18 19:52:19 crc kubenswrapper[5007]: I0218 19:52:19.801108 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lsnpv" Feb 18 19:52:19 crc kubenswrapper[5007]: I0218 19:52:19.820809 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6s9t\" (UniqueName: \"kubernetes.io/projected/228c0048-b5ba-474f-ae70-78b39ed488b3-kube-api-access-w6s9t\") pod \"228c0048-b5ba-474f-ae70-78b39ed488b3\" (UID: \"228c0048-b5ba-474f-ae70-78b39ed488b3\") " Feb 18 19:52:19 crc kubenswrapper[5007]: I0218 19:52:19.828272 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/228c0048-b5ba-474f-ae70-78b39ed488b3-kube-api-access-w6s9t" (OuterVolumeSpecName: "kube-api-access-w6s9t") pod "228c0048-b5ba-474f-ae70-78b39ed488b3" (UID: "228c0048-b5ba-474f-ae70-78b39ed488b3"). InnerVolumeSpecName "kube-api-access-w6s9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:19 crc kubenswrapper[5007]: I0218 19:52:19.922987 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6s9t\" (UniqueName: \"kubernetes.io/projected/228c0048-b5ba-474f-ae70-78b39ed488b3-kube-api-access-w6s9t\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:20 crc kubenswrapper[5007]: I0218 19:52:20.365113 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rm6s2" event={"ID":"87050be6-ea0e-4d4d-b0bb-429ec34ea139","Type":"ContainerStarted","Data":"b2aac8538ac875cc6ce1ab27038b74111168cfe25c841d762571cfd48c2ecff6"} Feb 18 19:52:20 crc kubenswrapper[5007]: I0218 19:52:20.368347 5007 generic.go:334] "Generic (PLEG): container finished" podID="228c0048-b5ba-474f-ae70-78b39ed488b3" containerID="a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009" exitCode=0 Feb 18 19:52:20 crc kubenswrapper[5007]: I0218 19:52:20.368475 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lsnpv" Feb 18 19:52:20 crc kubenswrapper[5007]: I0218 19:52:20.368484 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lsnpv" event={"ID":"228c0048-b5ba-474f-ae70-78b39ed488b3","Type":"ContainerDied","Data":"a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009"} Feb 18 19:52:20 crc kubenswrapper[5007]: I0218 19:52:20.368721 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lsnpv" event={"ID":"228c0048-b5ba-474f-ae70-78b39ed488b3","Type":"ContainerDied","Data":"d3d2aa04ae6d510f6afe59b1819b5f950fc455f767ce515c0914c7da3e7dd5e2"} Feb 18 19:52:20 crc kubenswrapper[5007]: I0218 19:52:20.368764 5007 scope.go:117] "RemoveContainer" containerID="a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009" Feb 18 19:52:20 crc kubenswrapper[5007]: I0218 19:52:20.397517 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rm6s2" podStartSLOduration=2.348461633 podStartE2EDuration="2.397478144s" podCreationTimestamp="2026-02-18 19:52:18 +0000 UTC" firstStartedPulling="2026-02-18 19:52:19.201708472 +0000 UTC m=+823.983828016" lastFinishedPulling="2026-02-18 19:52:19.250724993 +0000 UTC m=+824.032844527" observedRunningTime="2026-02-18 19:52:20.390229917 +0000 UTC m=+825.172349521" watchObservedRunningTime="2026-02-18 19:52:20.397478144 +0000 UTC m=+825.179597718" Feb 18 19:52:20 crc kubenswrapper[5007]: I0218 19:52:20.399202 5007 scope.go:117] "RemoveContainer" containerID="a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009" Feb 18 19:52:20 crc kubenswrapper[5007]: E0218 19:52:20.406658 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009\": container with ID 
starting with a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009 not found: ID does not exist" containerID="a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009" Feb 18 19:52:20 crc kubenswrapper[5007]: I0218 19:52:20.406708 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009"} err="failed to get container status \"a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009\": rpc error: code = NotFound desc = could not find container \"a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009\": container with ID starting with a8957ae7728a5ccfa92f22c80ddc266162b357f397aef5a3a225a90626184009 not found: ID does not exist" Feb 18 19:52:20 crc kubenswrapper[5007]: I0218 19:52:20.417498 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-lsnpv"] Feb 18 19:52:20 crc kubenswrapper[5007]: I0218 19:52:20.424050 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-lsnpv"] Feb 18 19:52:21 crc kubenswrapper[5007]: I0218 19:52:21.925385 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="228c0048-b5ba-474f-ae70-78b39ed488b3" path="/var/lib/kubelet/pods/228c0048-b5ba-474f-ae70-78b39ed488b3/volumes" Feb 18 19:52:28 crc kubenswrapper[5007]: I0218 19:52:28.941573 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-rm6s2" Feb 18 19:52:28 crc kubenswrapper[5007]: I0218 19:52:28.942372 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-rm6s2" Feb 18 19:52:28 crc kubenswrapper[5007]: I0218 19:52:28.974006 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-rm6s2" Feb 18 19:52:29 crc 
kubenswrapper[5007]: I0218 19:52:29.493054 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-rm6s2" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.617023 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk"] Feb 18 19:52:30 crc kubenswrapper[5007]: E0218 19:52:30.618175 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228c0048-b5ba-474f-ae70-78b39ed488b3" containerName="registry-server" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.618204 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="228c0048-b5ba-474f-ae70-78b39ed488b3" containerName="registry-server" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.618744 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="228c0048-b5ba-474f-ae70-78b39ed488b3" containerName="registry-server" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.625567 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.639762 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f99f9" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.669388 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk"] Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.685819 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z58w6\" (UniqueName: \"kubernetes.io/projected/a8451bce-bcdc-4025-ab1a-9474370807f7-kube-api-access-z58w6\") pod \"b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk\" (UID: \"a8451bce-bcdc-4025-ab1a-9474370807f7\") " pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.685916 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8451bce-bcdc-4025-ab1a-9474370807f7-bundle\") pod \"b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk\" (UID: \"a8451bce-bcdc-4025-ab1a-9474370807f7\") " pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.685942 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8451bce-bcdc-4025-ab1a-9474370807f7-util\") pod \"b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk\" (UID: \"a8451bce-bcdc-4025-ab1a-9474370807f7\") " pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 
19:52:30.787093 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8451bce-bcdc-4025-ab1a-9474370807f7-bundle\") pod \"b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk\" (UID: \"a8451bce-bcdc-4025-ab1a-9474370807f7\") " pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.787515 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8451bce-bcdc-4025-ab1a-9474370807f7-util\") pod \"b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk\" (UID: \"a8451bce-bcdc-4025-ab1a-9474370807f7\") " pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.787744 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z58w6\" (UniqueName: \"kubernetes.io/projected/a8451bce-bcdc-4025-ab1a-9474370807f7-kube-api-access-z58w6\") pod \"b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk\" (UID: \"a8451bce-bcdc-4025-ab1a-9474370807f7\") " pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.787953 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8451bce-bcdc-4025-ab1a-9474370807f7-bundle\") pod \"b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk\" (UID: \"a8451bce-bcdc-4025-ab1a-9474370807f7\") " pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.788035 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a8451bce-bcdc-4025-ab1a-9474370807f7-util\") pod \"b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk\" (UID: \"a8451bce-bcdc-4025-ab1a-9474370807f7\") " pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.813010 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z58w6\" (UniqueName: \"kubernetes.io/projected/a8451bce-bcdc-4025-ab1a-9474370807f7-kube-api-access-z58w6\") pod \"b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk\" (UID: \"a8451bce-bcdc-4025-ab1a-9474370807f7\") " pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:30 crc kubenswrapper[5007]: I0218 19:52:30.970835 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:31 crc kubenswrapper[5007]: I0218 19:52:31.481621 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk"] Feb 18 19:52:32 crc kubenswrapper[5007]: I0218 19:52:32.475599 5007 generic.go:334] "Generic (PLEG): container finished" podID="a8451bce-bcdc-4025-ab1a-9474370807f7" containerID="e23c8b3adf5d77d35999591453edeef4b4121d8adb8a9cb4f70ca739c07f2058" exitCode=0 Feb 18 19:52:32 crc kubenswrapper[5007]: I0218 19:52:32.475659 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" event={"ID":"a8451bce-bcdc-4025-ab1a-9474370807f7","Type":"ContainerDied","Data":"e23c8b3adf5d77d35999591453edeef4b4121d8adb8a9cb4f70ca739c07f2058"} Feb 18 19:52:32 crc kubenswrapper[5007]: I0218 19:52:32.475804 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" event={"ID":"a8451bce-bcdc-4025-ab1a-9474370807f7","Type":"ContainerStarted","Data":"c1cae8c169e31c3b5b4cd3628401fff6248358c2d3932e7b9d6119e2b81851c8"} Feb 18 19:52:33 crc kubenswrapper[5007]: I0218 19:52:33.487753 5007 generic.go:334] "Generic (PLEG): container finished" podID="a8451bce-bcdc-4025-ab1a-9474370807f7" containerID="9e56a74c3e18f79f08de828d9278a5832c6966413047a05bc64250e787b591e3" exitCode=0 Feb 18 19:52:33 crc kubenswrapper[5007]: I0218 19:52:33.487843 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" event={"ID":"a8451bce-bcdc-4025-ab1a-9474370807f7","Type":"ContainerDied","Data":"9e56a74c3e18f79f08de828d9278a5832c6966413047a05bc64250e787b591e3"} Feb 18 19:52:34 crc kubenswrapper[5007]: I0218 19:52:34.499649 5007 generic.go:334] "Generic (PLEG): container finished" podID="a8451bce-bcdc-4025-ab1a-9474370807f7" containerID="af02a87033e4082ac9cab2d3a95e42064b64a9c091cb509166f89c58297d7607" exitCode=0 Feb 18 19:52:34 crc kubenswrapper[5007]: I0218 19:52:34.499761 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" event={"ID":"a8451bce-bcdc-4025-ab1a-9474370807f7","Type":"ContainerDied","Data":"af02a87033e4082ac9cab2d3a95e42064b64a9c091cb509166f89c58297d7607"} Feb 18 19:52:35 crc kubenswrapper[5007]: I0218 19:52:35.778728 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:35 crc kubenswrapper[5007]: I0218 19:52:35.896907 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8451bce-bcdc-4025-ab1a-9474370807f7-util\") pod \"a8451bce-bcdc-4025-ab1a-9474370807f7\" (UID: \"a8451bce-bcdc-4025-ab1a-9474370807f7\") " Feb 18 19:52:35 crc kubenswrapper[5007]: I0218 19:52:35.897051 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z58w6\" (UniqueName: \"kubernetes.io/projected/a8451bce-bcdc-4025-ab1a-9474370807f7-kube-api-access-z58w6\") pod \"a8451bce-bcdc-4025-ab1a-9474370807f7\" (UID: \"a8451bce-bcdc-4025-ab1a-9474370807f7\") " Feb 18 19:52:35 crc kubenswrapper[5007]: I0218 19:52:35.897098 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8451bce-bcdc-4025-ab1a-9474370807f7-bundle\") pod \"a8451bce-bcdc-4025-ab1a-9474370807f7\" (UID: \"a8451bce-bcdc-4025-ab1a-9474370807f7\") " Feb 18 19:52:35 crc kubenswrapper[5007]: I0218 19:52:35.897876 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8451bce-bcdc-4025-ab1a-9474370807f7-bundle" (OuterVolumeSpecName: "bundle") pod "a8451bce-bcdc-4025-ab1a-9474370807f7" (UID: "a8451bce-bcdc-4025-ab1a-9474370807f7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:52:35 crc kubenswrapper[5007]: I0218 19:52:35.905982 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8451bce-bcdc-4025-ab1a-9474370807f7-kube-api-access-z58w6" (OuterVolumeSpecName: "kube-api-access-z58w6") pod "a8451bce-bcdc-4025-ab1a-9474370807f7" (UID: "a8451bce-bcdc-4025-ab1a-9474370807f7"). InnerVolumeSpecName "kube-api-access-z58w6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:35 crc kubenswrapper[5007]: I0218 19:52:35.912696 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8451bce-bcdc-4025-ab1a-9474370807f7-util" (OuterVolumeSpecName: "util") pod "a8451bce-bcdc-4025-ab1a-9474370807f7" (UID: "a8451bce-bcdc-4025-ab1a-9474370807f7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:52:35 crc kubenswrapper[5007]: I0218 19:52:35.999266 5007 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8451bce-bcdc-4025-ab1a-9474370807f7-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:35 crc kubenswrapper[5007]: I0218 19:52:35.999317 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z58w6\" (UniqueName: \"kubernetes.io/projected/a8451bce-bcdc-4025-ab1a-9474370807f7-kube-api-access-z58w6\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:35 crc kubenswrapper[5007]: I0218 19:52:35.999342 5007 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8451bce-bcdc-4025-ab1a-9474370807f7-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:36 crc kubenswrapper[5007]: I0218 19:52:36.517202 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" event={"ID":"a8451bce-bcdc-4025-ab1a-9474370807f7","Type":"ContainerDied","Data":"c1cae8c169e31c3b5b4cd3628401fff6248358c2d3932e7b9d6119e2b81851c8"} Feb 18 19:52:36 crc kubenswrapper[5007]: I0218 19:52:36.517265 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1cae8c169e31c3b5b4cd3628401fff6248358c2d3932e7b9d6119e2b81851c8" Feb 18 19:52:36 crc kubenswrapper[5007]: I0218 19:52:36.517290 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b6cb3333d0d5bb201d260e15a6324e1ac92930a9479b1fe99c70309336ltcrk" Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.296521 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k"] Feb 18 19:52:43 crc kubenswrapper[5007]: E0218 19:52:43.297451 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8451bce-bcdc-4025-ab1a-9474370807f7" containerName="pull" Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.297468 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8451bce-bcdc-4025-ab1a-9474370807f7" containerName="pull" Feb 18 19:52:43 crc kubenswrapper[5007]: E0218 19:52:43.297496 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8451bce-bcdc-4025-ab1a-9474370807f7" containerName="util" Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.297503 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8451bce-bcdc-4025-ab1a-9474370807f7" containerName="util" Feb 18 19:52:43 crc kubenswrapper[5007]: E0218 19:52:43.297515 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8451bce-bcdc-4025-ab1a-9474370807f7" containerName="extract" Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.297525 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8451bce-bcdc-4025-ab1a-9474370807f7" containerName="extract" Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.297736 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8451bce-bcdc-4025-ab1a-9474370807f7" containerName="extract" Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.298401 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k" Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.300757 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zzrxh" Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.307110 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47tgw\" (UniqueName: \"kubernetes.io/projected/0d1ed214-da70-4c83-bc4f-21c5590a3305-kube-api-access-47tgw\") pod \"openstack-operator-controller-init-f746c56cb-q7k5k\" (UID: \"0d1ed214-da70-4c83-bc4f-21c5590a3305\") " pod="openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k" Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.329114 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k"] Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.408804 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47tgw\" (UniqueName: \"kubernetes.io/projected/0d1ed214-da70-4c83-bc4f-21c5590a3305-kube-api-access-47tgw\") pod \"openstack-operator-controller-init-f746c56cb-q7k5k\" (UID: \"0d1ed214-da70-4c83-bc4f-21c5590a3305\") " pod="openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k" Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.432658 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47tgw\" (UniqueName: \"kubernetes.io/projected/0d1ed214-da70-4c83-bc4f-21c5590a3305-kube-api-access-47tgw\") pod \"openstack-operator-controller-init-f746c56cb-q7k5k\" (UID: \"0d1ed214-da70-4c83-bc4f-21c5590a3305\") " pod="openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k" Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.625292 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k" Feb 18 19:52:43 crc kubenswrapper[5007]: I0218 19:52:43.873467 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k"] Feb 18 19:52:44 crc kubenswrapper[5007]: I0218 19:52:44.578971 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k" event={"ID":"0d1ed214-da70-4c83-bc4f-21c5590a3305","Type":"ContainerStarted","Data":"2e8d9fe4249f0e9f805c623360c0d7a1d99102b1981f72e89dd38bbc432c5a2b"} Feb 18 19:52:48 crc kubenswrapper[5007]: I0218 19:52:48.612422 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k" event={"ID":"0d1ed214-da70-4c83-bc4f-21c5590a3305","Type":"ContainerStarted","Data":"1d9f29ef01b22c4ec33b5b020d1bef055839cbf1f588510b2c3af6ad3cb13a75"} Feb 18 19:52:48 crc kubenswrapper[5007]: I0218 19:52:48.612912 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k" Feb 18 19:52:48 crc kubenswrapper[5007]: I0218 19:52:48.644915 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k" podStartSLOduration=1.8119430250000002 podStartE2EDuration="5.644887478s" podCreationTimestamp="2026-02-18 19:52:43 +0000 UTC" firstStartedPulling="2026-02-18 19:52:43.877891609 +0000 UTC m=+848.660011163" lastFinishedPulling="2026-02-18 19:52:47.710836092 +0000 UTC m=+852.492955616" observedRunningTime="2026-02-18 19:52:48.642098689 +0000 UTC m=+853.424218223" watchObservedRunningTime="2026-02-18 19:52:48.644887478 +0000 UTC m=+853.427007042" Feb 18 19:52:53 crc kubenswrapper[5007]: I0218 19:52:53.628195 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-init-f746c56cb-q7k5k" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.260379 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.262171 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.264678 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-kc6fj" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.272379 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.276769 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.278813 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-pbmpm" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.282597 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.286775 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.294516 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh4rz\" (UniqueName: \"kubernetes.io/projected/2f42c84b-2cc4-45c6-a9cb-0da6e2b4519e-kube-api-access-lh4rz\") pod \"barbican-operator-controller-manager-868647ff47-g8hqj\" (UID: \"2f42c84b-2cc4-45c6-a9cb-0da6e2b4519e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.294583 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whn74\" (UniqueName: \"kubernetes.io/projected/b4617470-b330-4c8a-bcc0-018bb10bc883-kube-api-access-whn74\") pod \"cinder-operator-controller-manager-5d946d989d-mlkjz\" (UID: \"b4617470-b330-4c8a-bcc0-018bb10bc883\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.294944 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.313888 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.313988 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.315792 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-4skgk" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.324668 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.325543 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.329126 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-thf8k" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.347923 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.349099 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.354155 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-4gqv6" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.383469 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.397754 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8nmf\" (UniqueName: \"kubernetes.io/projected/951189e9-dedd-4c70-9608-a06110b98f8d-kube-api-access-k8nmf\") pod \"heat-operator-controller-manager-69f49c598c-fjdj8\" (UID: \"951189e9-dedd-4c70-9608-a06110b98f8d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.397817 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9vwb\" (UniqueName: \"kubernetes.io/projected/e077b63c-904f-48a4-9363-a40876135623-kube-api-access-t9vwb\") pod \"glance-operator-controller-manager-77987464f4-m7x7h\" (UID: \"e077b63c-904f-48a4-9363-a40876135623\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.397849 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh4rz\" (UniqueName: \"kubernetes.io/projected/2f42c84b-2cc4-45c6-a9cb-0da6e2b4519e-kube-api-access-lh4rz\") pod \"barbican-operator-controller-manager-868647ff47-g8hqj\" (UID: \"2f42c84b-2cc4-45c6-a9cb-0da6e2b4519e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.397882 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skjn2\" (UniqueName: \"kubernetes.io/projected/280b4324-5c76-4574-bcc3-3f5d271efd2d-kube-api-access-skjn2\") pod \"designate-operator-controller-manager-6d8bf5c495-lmg2v\" (UID: \"280b4324-5c76-4574-bcc3-3f5d271efd2d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.397937 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whn74\" (UniqueName: \"kubernetes.io/projected/b4617470-b330-4c8a-bcc0-018bb10bc883-kube-api-access-whn74\") pod \"cinder-operator-controller-manager-5d946d989d-mlkjz\" (UID: \"b4617470-b330-4c8a-bcc0-018bb10bc883\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.411144 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.449360 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whn74\" (UniqueName: \"kubernetes.io/projected/b4617470-b330-4c8a-bcc0-018bb10bc883-kube-api-access-whn74\") pod \"cinder-operator-controller-manager-5d946d989d-mlkjz\" (UID: \"b4617470-b330-4c8a-bcc0-018bb10bc883\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.464859 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh4rz\" (UniqueName: \"kubernetes.io/projected/2f42c84b-2cc4-45c6-a9cb-0da6e2b4519e-kube-api-access-lh4rz\") pod \"barbican-operator-controller-manager-868647ff47-g8hqj\" (UID: \"2f42c84b-2cc4-45c6-a9cb-0da6e2b4519e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj" Feb 18 19:53:14 crc 
kubenswrapper[5007]: I0218 19:53:14.493213 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.494226 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.501903 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8nmf\" (UniqueName: \"kubernetes.io/projected/951189e9-dedd-4c70-9608-a06110b98f8d-kube-api-access-k8nmf\") pod \"heat-operator-controller-manager-69f49c598c-fjdj8\" (UID: \"951189e9-dedd-4c70-9608-a06110b98f8d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.501960 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9vwb\" (UniqueName: \"kubernetes.io/projected/e077b63c-904f-48a4-9363-a40876135623-kube-api-access-t9vwb\") pod \"glance-operator-controller-manager-77987464f4-m7x7h\" (UID: \"e077b63c-904f-48a4-9363-a40876135623\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.501991 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skjn2\" (UniqueName: \"kubernetes.io/projected/280b4324-5c76-4574-bcc3-3f5d271efd2d-kube-api-access-skjn2\") pod \"designate-operator-controller-manager-6d8bf5c495-lmg2v\" (UID: \"280b4324-5c76-4574-bcc3-3f5d271efd2d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.508169 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2rnpb" Feb 18 19:53:14 crc 
kubenswrapper[5007]: I0218 19:53:14.529915 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.533646 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9vwb\" (UniqueName: \"kubernetes.io/projected/e077b63c-904f-48a4-9363-a40876135623-kube-api-access-t9vwb\") pod \"glance-operator-controller-manager-77987464f4-m7x7h\" (UID: \"e077b63c-904f-48a4-9363-a40876135623\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.537177 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skjn2\" (UniqueName: \"kubernetes.io/projected/280b4324-5c76-4574-bcc3-3f5d271efd2d-kube-api-access-skjn2\") pod \"designate-operator-controller-manager-6d8bf5c495-lmg2v\" (UID: \"280b4324-5c76-4574-bcc3-3f5d271efd2d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.542395 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.543382 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.548775 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8wbc2" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.548967 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.554061 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8nmf\" (UniqueName: \"kubernetes.io/projected/951189e9-dedd-4c70-9608-a06110b98f8d-kube-api-access-k8nmf\") pod \"heat-operator-controller-manager-69f49c598c-fjdj8\" (UID: \"951189e9-dedd-4c70-9608-a06110b98f8d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.559607 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.560620 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.563351 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vfc2z" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.581777 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.588989 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.598796 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.611136 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8vbf\" (UniqueName: \"kubernetes.io/projected/65f35275-3e75-45d1-b4db-865b02e3c061-kube-api-access-m8vbf\") pod \"horizon-operator-controller-manager-5b9b8895d5-ss4hw\" (UID: \"65f35275-3e75-45d1-b4db-865b02e3c061\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.611662 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.612605 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.617137 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-h28w4" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.640798 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.654145 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.655790 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.701505 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-jt656"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.702808 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jt656" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.712343 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8vbf\" (UniqueName: \"kubernetes.io/projected/65f35275-3e75-45d1-b4db-865b02e3c061-kube-api-access-m8vbf\") pod \"horizon-operator-controller-manager-5b9b8895d5-ss4hw\" (UID: \"65f35275-3e75-45d1-b4db-865b02e3c061\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.712433 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b27c\" (UniqueName: \"kubernetes.io/projected/ebb4c943-3dac-4b30-805c-f40a96d848b6-kube-api-access-6b27c\") pod \"keystone-operator-controller-manager-b4d948c87-ckbhj\" (UID: \"ebb4c943-3dac-4b30-805c-f40a96d848b6\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.712502 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlqbg\" (UniqueName: \"kubernetes.io/projected/2ce1afe9-a5be-4d33-8755-5743475cdcff-kube-api-access-xlqbg\") pod \"infra-operator-controller-manager-79d975b745-l4tvp\" (UID: \"2ce1afe9-a5be-4d33-8755-5743475cdcff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.712539 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmcz\" (UniqueName: \"kubernetes.io/projected/741852ae-b134-4f2f-81f5-358eeb0bcca4-kube-api-access-blmcz\") pod \"ironic-operator-controller-manager-554564d7fc-lhcvb\" (UID: \"741852ae-b134-4f2f-81f5-358eeb0bcca4\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.712571 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert\") pod \"infra-operator-controller-manager-79d975b745-l4tvp\" (UID: \"2ce1afe9-a5be-4d33-8755-5743475cdcff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.716106 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-nbqtt" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.719315 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.744902 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.757730 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8vbf\" (UniqueName: \"kubernetes.io/projected/65f35275-3e75-45d1-b4db-865b02e3c061-kube-api-access-m8vbf\") pod \"horizon-operator-controller-manager-5b9b8895d5-ss4hw\" (UID: \"65f35275-3e75-45d1-b4db-865b02e3c061\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.791504 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.792876 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.807667 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nnct5" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.822399 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.823506 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlqbg\" (UniqueName: \"kubernetes.io/projected/2ce1afe9-a5be-4d33-8755-5743475cdcff-kube-api-access-xlqbg\") pod \"infra-operator-controller-manager-79d975b745-l4tvp\" (UID: \"2ce1afe9-a5be-4d33-8755-5743475cdcff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.823561 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blmcz\" (UniqueName: \"kubernetes.io/projected/741852ae-b134-4f2f-81f5-358eeb0bcca4-kube-api-access-blmcz\") pod \"ironic-operator-controller-manager-554564d7fc-lhcvb\" (UID: \"741852ae-b134-4f2f-81f5-358eeb0bcca4\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.823607 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert\") pod \"infra-operator-controller-manager-79d975b745-l4tvp\" (UID: \"2ce1afe9-a5be-4d33-8755-5743475cdcff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.823670 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-t8xqw\" (UniqueName: \"kubernetes.io/projected/75eaa04a-0f88-4307-be6e-fe8d742145a9-kube-api-access-t8xqw\") pod \"manila-operator-controller-manager-54f6768c69-jt656\" (UID: \"75eaa04a-0f88-4307-be6e-fe8d742145a9\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jt656" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.823695 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b27c\" (UniqueName: \"kubernetes.io/projected/ebb4c943-3dac-4b30-805c-f40a96d848b6-kube-api-access-6b27c\") pod \"keystone-operator-controller-manager-b4d948c87-ckbhj\" (UID: \"ebb4c943-3dac-4b30-805c-f40a96d848b6\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj" Feb 18 19:53:14 crc kubenswrapper[5007]: E0218 19:53:14.823860 5007 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:53:14 crc kubenswrapper[5007]: E0218 19:53:14.823920 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert podName:2ce1afe9-a5be-4d33-8755-5743475cdcff nodeName:}" failed. No retries permitted until 2026-02-18 19:53:15.323902085 +0000 UTC m=+880.106021609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert") pod "infra-operator-controller-manager-79d975b745-l4tvp" (UID: "2ce1afe9-a5be-4d33-8755-5743475cdcff") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.831023 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.880649 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blmcz\" (UniqueName: \"kubernetes.io/projected/741852ae-b134-4f2f-81f5-358eeb0bcca4-kube-api-access-blmcz\") pod \"ironic-operator-controller-manager-554564d7fc-lhcvb\" (UID: \"741852ae-b134-4f2f-81f5-358eeb0bcca4\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.889176 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b27c\" (UniqueName: \"kubernetes.io/projected/ebb4c943-3dac-4b30-805c-f40a96d848b6-kube-api-access-6b27c\") pod \"keystone-operator-controller-manager-b4d948c87-ckbhj\" (UID: \"ebb4c943-3dac-4b30-805c-f40a96d848b6\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.889413 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.890276 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.895887 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4rg8b" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.901070 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlqbg\" (UniqueName: \"kubernetes.io/projected/2ce1afe9-a5be-4d33-8755-5743475cdcff-kube-api-access-xlqbg\") pod \"infra-operator-controller-manager-79d975b745-l4tvp\" (UID: \"2ce1afe9-a5be-4d33-8755-5743475cdcff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.914065 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-jt656"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.929947 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8btp\" (UniqueName: \"kubernetes.io/projected/6dcfa1b8-c3ec-45ce-9aba-0089081afae3-kube-api-access-v8btp\") pod \"neutron-operator-controller-manager-64ddbf8bb-8clwq\" (UID: \"6dcfa1b8-c3ec-45ce-9aba-0089081afae3\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.930112 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8xqw\" (UniqueName: \"kubernetes.io/projected/75eaa04a-0f88-4307-be6e-fe8d742145a9-kube-api-access-t8xqw\") pod \"manila-operator-controller-manager-54f6768c69-jt656\" (UID: \"75eaa04a-0f88-4307-be6e-fe8d742145a9\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jt656" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.930148 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jwhr\" (UniqueName: \"kubernetes.io/projected/1a0be001-6181-4af3-9656-e48611b7c93c-kube-api-access-4jwhr\") pod \"mariadb-operator-controller-manager-6994f66f48-49vtz\" (UID: \"1a0be001-6181-4af3-9656-e48611b7c93c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.966204 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq"] Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.967193 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq" Feb 18 19:53:14 crc kubenswrapper[5007]: I0218 19:53:14.975838 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bpc4h" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.002937 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8xqw\" (UniqueName: \"kubernetes.io/projected/75eaa04a-0f88-4307-be6e-fe8d742145a9-kube-api-access-t8xqw\") pod \"manila-operator-controller-manager-54f6768c69-jt656\" (UID: \"75eaa04a-0f88-4307-be6e-fe8d742145a9\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jt656" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.031332 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqr6v\" (UniqueName: \"kubernetes.io/projected/8996f42c-6dec-44f2-9664-04debce79ae0-kube-api-access-hqr6v\") pod \"nova-operator-controller-manager-567668f5cf-p28vq\" (UID: \"8996f42c-6dec-44f2-9664-04debce79ae0\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.031394 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jwhr\" (UniqueName: \"kubernetes.io/projected/1a0be001-6181-4af3-9656-e48611b7c93c-kube-api-access-4jwhr\") pod \"mariadb-operator-controller-manager-6994f66f48-49vtz\" (UID: \"1a0be001-6181-4af3-9656-e48611b7c93c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.031417 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8btp\" (UniqueName: \"kubernetes.io/projected/6dcfa1b8-c3ec-45ce-9aba-0089081afae3-kube-api-access-v8btp\") pod \"neutron-operator-controller-manager-64ddbf8bb-8clwq\" (UID: \"6dcfa1b8-c3ec-45ce-9aba-0089081afae3\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.034838 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.035703 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.041796 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-lhhrp" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.042508 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.067576 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.071206 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8btp\" (UniqueName: \"kubernetes.io/projected/6dcfa1b8-c3ec-45ce-9aba-0089081afae3-kube-api-access-v8btp\") pod \"neutron-operator-controller-manager-64ddbf8bb-8clwq\" (UID: \"6dcfa1b8-c3ec-45ce-9aba-0089081afae3\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.077047 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.082986 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.084275 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.088923 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.094772 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wsc6z" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.094813 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jwhr\" (UniqueName: \"kubernetes.io/projected/1a0be001-6181-4af3-9656-e48611b7c93c-kube-api-access-4jwhr\") pod \"mariadb-operator-controller-manager-6994f66f48-49vtz\" (UID: \"1a0be001-6181-4af3-9656-e48611b7c93c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.094945 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.105601 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.114522 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.115551 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.122030 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-l2j65" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.128537 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.129962 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.133739 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggzsd\" (UniqueName: \"kubernetes.io/projected/bb8aa80a-f91c-40cd-b007-1299afdbb054-kube-api-access-ggzsd\") pod \"octavia-operator-controller-manager-69f8888797-pcsjz\" (UID: \"bb8aa80a-f91c-40cd-b007-1299afdbb054\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.133857 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqr6v\" (UniqueName: \"kubernetes.io/projected/8996f42c-6dec-44f2-9664-04debce79ae0-kube-api-access-hqr6v\") pod \"nova-operator-controller-manager-567668f5cf-p28vq\" (UID: \"8996f42c-6dec-44f2-9664-04debce79ae0\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.133915 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44xpc\" (UniqueName: \"kubernetes.io/projected/56389769-2657-410c-86dc-3dccb7de3836-kube-api-access-44xpc\") pod \"ovn-operator-controller-manager-d44cf6b75-5fsf6\" (UID: 
\"56389769-2657-410c-86dc-3dccb7de3836\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.133948 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr\" (UID: \"1d0172b2-fe1c-4265-a3cb-1c2f267223a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.134066 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j99dm\" (UniqueName: \"kubernetes.io/projected/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-kube-api-access-j99dm\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr\" (UID: \"1d0172b2-fe1c-4265-a3cb-1c2f267223a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.136633 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.138074 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.200871 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-pc42v"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.201477 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jt656" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.202265 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.210545 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-btwfl" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.222614 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-mmzgh" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.233210 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqr6v\" (UniqueName: \"kubernetes.io/projected/8996f42c-6dec-44f2-9664-04debce79ae0-kube-api-access-hqr6v\") pod \"nova-operator-controller-manager-567668f5cf-p28vq\" (UID: \"8996f42c-6dec-44f2-9664-04debce79ae0\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.236894 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99dm\" (UniqueName: \"kubernetes.io/projected/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-kube-api-access-j99dm\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr\" (UID: \"1d0172b2-fe1c-4265-a3cb-1c2f267223a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.236973 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggzsd\" (UniqueName: \"kubernetes.io/projected/bb8aa80a-f91c-40cd-b007-1299afdbb054-kube-api-access-ggzsd\") pod \"octavia-operator-controller-manager-69f8888797-pcsjz\" (UID: 
\"bb8aa80a-f91c-40cd-b007-1299afdbb054\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.237014 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44xpc\" (UniqueName: \"kubernetes.io/projected/56389769-2657-410c-86dc-3dccb7de3836-kube-api-access-44xpc\") pod \"ovn-operator-controller-manager-d44cf6b75-5fsf6\" (UID: \"56389769-2657-410c-86dc-3dccb7de3836\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.237036 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr\" (UID: \"1d0172b2-fe1c-4265-a3cb-1c2f267223a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:15 crc kubenswrapper[5007]: E0218 19:53:15.237232 5007 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:53:15 crc kubenswrapper[5007]: E0218 19:53:15.237296 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert podName:1d0172b2-fe1c-4265-a3cb-1c2f267223a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:15.737275428 +0000 UTC m=+880.519394952 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" (UID: "1d0172b2-fe1c-4265-a3cb-1c2f267223a5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.271340 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.292244 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j99dm\" (UniqueName: \"kubernetes.io/projected/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-kube-api-access-j99dm\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr\" (UID: \"1d0172b2-fe1c-4265-a3cb-1c2f267223a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.292574 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggzsd\" (UniqueName: \"kubernetes.io/projected/bb8aa80a-f91c-40cd-b007-1299afdbb054-kube-api-access-ggzsd\") pod \"octavia-operator-controller-manager-69f8888797-pcsjz\" (UID: \"bb8aa80a-f91c-40cd-b007-1299afdbb054\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.293797 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44xpc\" (UniqueName: \"kubernetes.io/projected/56389769-2657-410c-86dc-3dccb7de3836-kube-api-access-44xpc\") pod \"ovn-operator-controller-manager-d44cf6b75-5fsf6\" (UID: \"56389769-2657-410c-86dc-3dccb7de3836\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.322523 5007 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.329835 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.335627 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.342590 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-pc42v"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.345058 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert\") pod \"infra-operator-controller-manager-79d975b745-l4tvp\" (UID: \"2ce1afe9-a5be-4d33-8755-5743475cdcff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.345197 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jz58\" (UniqueName: \"kubernetes.io/projected/06ec4490-9542-41e8-bcab-6b3cab6107fd-kube-api-access-5jz58\") pod \"swift-operator-controller-manager-68f46476f-pc42v\" (UID: \"06ec4490-9542-41e8-bcab-6b3cab6107fd\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.345276 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ddg9\" (UniqueName: \"kubernetes.io/projected/3d3f1240-ac95-4de0-97ca-94f10ae93a38-kube-api-access-2ddg9\") pod \"placement-operator-controller-manager-8497b45c89-sfsc4\" (UID: 
\"3d3f1240-ac95-4de0-97ca-94f10ae93a38\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4" Feb 18 19:53:15 crc kubenswrapper[5007]: E0218 19:53:15.345500 5007 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:53:15 crc kubenswrapper[5007]: E0218 19:53:15.345579 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert podName:2ce1afe9-a5be-4d33-8755-5743475cdcff nodeName:}" failed. No retries permitted until 2026-02-18 19:53:16.345540085 +0000 UTC m=+881.127659609 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert") pod "infra-operator-controller-manager-79d975b745-l4tvp" (UID: "2ce1afe9-a5be-4d33-8755-5743475cdcff") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.352176 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.354916 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.355145 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.374885 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-lhhps"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.376105 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.376882 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-9ssml" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.381530 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-lhhps"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.382969 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-622x4" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.395937 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.439613 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.443195 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.447987 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9769x" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.450157 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.451010 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jz58\" (UniqueName: \"kubernetes.io/projected/06ec4490-9542-41e8-bcab-6b3cab6107fd-kube-api-access-5jz58\") pod \"swift-operator-controller-manager-68f46476f-pc42v\" (UID: \"06ec4490-9542-41e8-bcab-6b3cab6107fd\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.451040 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ddg9\" (UniqueName: \"kubernetes.io/projected/3d3f1240-ac95-4de0-97ca-94f10ae93a38-kube-api-access-2ddg9\") pod \"placement-operator-controller-manager-8497b45c89-sfsc4\" (UID: \"3d3f1240-ac95-4de0-97ca-94f10ae93a38\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.472611 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.506381 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ddg9\" (UniqueName: \"kubernetes.io/projected/3d3f1240-ac95-4de0-97ca-94f10ae93a38-kube-api-access-2ddg9\") pod \"placement-operator-controller-manager-8497b45c89-sfsc4\" (UID: \"3d3f1240-ac95-4de0-97ca-94f10ae93a38\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.509584 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jz58\" (UniqueName: \"kubernetes.io/projected/06ec4490-9542-41e8-bcab-6b3cab6107fd-kube-api-access-5jz58\") pod \"swift-operator-controller-manager-68f46476f-pc42v\" 
(UID: \"06ec4490-9542-41e8-bcab-6b3cab6107fd\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.522580 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.523528 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.525929 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.526293 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gqwt7" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.526142 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.539349 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.555601 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.556524 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.559726 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-whhxt" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.560661 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.560741 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5lr7\" (UniqueName: \"kubernetes.io/projected/574dc561-0a0f-4118-b01f-954a238ba12c-kube-api-access-g5lr7\") pod \"telemetry-operator-controller-manager-7f45b4ff68-pwdj9\" (UID: \"574dc561-0a0f-4118-b01f-954a238ba12c\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.560781 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2gwg\" (UniqueName: \"kubernetes.io/projected/39d2439e-f2db-43ba-b3af-d6642b9d5d22-kube-api-access-m2gwg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tlvn4\" (UID: \"39d2439e-f2db-43ba-b3af-d6642b9d5d22\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.560813 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x4wx\" (UniqueName: 
\"kubernetes.io/projected/d494444c-827a-44ff-9e45-c1097f39b8d6-kube-api-access-4x4wx\") pod \"test-operator-controller-manager-7866795846-lhhps\" (UID: \"d494444c-827a-44ff-9e45-c1097f39b8d6\") " pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.560857 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.560943 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmrdx\" (UniqueName: \"kubernetes.io/projected/29215188-681f-4991-9254-0ec04b455943-kube-api-access-zmrdx\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.561023 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwsqd\" (UniqueName: \"kubernetes.io/projected/39cb540d-ddc7-4d2f-a63b-2dd0c400ac01-kube-api-access-jwsqd\") pod \"watcher-operator-controller-manager-7f6db7f7d8-d772r\" (UID: \"39cb540d-ddc7-4d2f-a63b-2dd0c400ac01\") " pod="openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.606521 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.662748 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.662800 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmrdx\" (UniqueName: \"kubernetes.io/projected/29215188-681f-4991-9254-0ec04b455943-kube-api-access-zmrdx\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.662835 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwsqd\" (UniqueName: \"kubernetes.io/projected/39cb540d-ddc7-4d2f-a63b-2dd0c400ac01-kube-api-access-jwsqd\") pod \"watcher-operator-controller-manager-7f6db7f7d8-d772r\" (UID: \"39cb540d-ddc7-4d2f-a63b-2dd0c400ac01\") " pod="openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.662870 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.662915 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5lr7\" (UniqueName: \"kubernetes.io/projected/574dc561-0a0f-4118-b01f-954a238ba12c-kube-api-access-g5lr7\") pod 
\"telemetry-operator-controller-manager-7f45b4ff68-pwdj9\" (UID: \"574dc561-0a0f-4118-b01f-954a238ba12c\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.662942 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2gwg\" (UniqueName: \"kubernetes.io/projected/39d2439e-f2db-43ba-b3af-d6642b9d5d22-kube-api-access-m2gwg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tlvn4\" (UID: \"39d2439e-f2db-43ba-b3af-d6642b9d5d22\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.662967 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x4wx\" (UniqueName: \"kubernetes.io/projected/d494444c-827a-44ff-9e45-c1097f39b8d6-kube-api-access-4x4wx\") pod \"test-operator-controller-manager-7866795846-lhhps\" (UID: \"d494444c-827a-44ff-9e45-c1097f39b8d6\") " pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" Feb 18 19:53:15 crc kubenswrapper[5007]: E0218 19:53:15.663134 5007 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:53:15 crc kubenswrapper[5007]: E0218 19:53:15.663177 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs podName:29215188-681f-4991-9254-0ec04b455943 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:16.163162974 +0000 UTC m=+880.945282498 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs") pod "openstack-operator-controller-manager-dd6d5466c-r8s8q" (UID: "29215188-681f-4991-9254-0ec04b455943") : secret "webhook-server-cert" not found Feb 18 19:53:15 crc kubenswrapper[5007]: E0218 19:53:15.663237 5007 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:53:15 crc kubenswrapper[5007]: E0218 19:53:15.663272 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs podName:29215188-681f-4991-9254-0ec04b455943 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:16.163263487 +0000 UTC m=+880.945383011 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs") pod "openstack-operator-controller-manager-dd6d5466c-r8s8q" (UID: "29215188-681f-4991-9254-0ec04b455943") : secret "metrics-server-cert" not found Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.695814 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2gwg\" (UniqueName: \"kubernetes.io/projected/39d2439e-f2db-43ba-b3af-d6642b9d5d22-kube-api-access-m2gwg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tlvn4\" (UID: \"39d2439e-f2db-43ba-b3af-d6642b9d5d22\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.696380 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj"] Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.699005 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.699493 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.708022 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x4wx\" (UniqueName: \"kubernetes.io/projected/d494444c-827a-44ff-9e45-c1097f39b8d6-kube-api-access-4x4wx\") pod \"test-operator-controller-manager-7866795846-lhhps\" (UID: \"d494444c-827a-44ff-9e45-c1097f39b8d6\") " pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.717549 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwsqd\" (UniqueName: \"kubernetes.io/projected/39cb540d-ddc7-4d2f-a63b-2dd0c400ac01-kube-api-access-jwsqd\") pod \"watcher-operator-controller-manager-7f6db7f7d8-d772r\" (UID: \"39cb540d-ddc7-4d2f-a63b-2dd0c400ac01\") " pod="openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.720810 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5lr7\" (UniqueName: \"kubernetes.io/projected/574dc561-0a0f-4118-b01f-954a238ba12c-kube-api-access-g5lr7\") pod \"telemetry-operator-controller-manager-7f45b4ff68-pwdj9\" (UID: \"574dc561-0a0f-4118-b01f-954a238ba12c\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.729242 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmrdx\" (UniqueName: \"kubernetes.io/projected/29215188-681f-4991-9254-0ec04b455943-kube-api-access-zmrdx\") pod 
\"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.763960 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr\" (UID: \"1d0172b2-fe1c-4265-a3cb-1c2f267223a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:15 crc kubenswrapper[5007]: E0218 19:53:15.764111 5007 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:53:15 crc kubenswrapper[5007]: E0218 19:53:15.764160 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert podName:1d0172b2-fe1c-4265-a3cb-1c2f267223a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:16.764144106 +0000 UTC m=+881.546263630 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" (UID: "1d0172b2-fe1c-4265-a3cb-1c2f267223a5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.776534 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.875894 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.887672 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj" event={"ID":"2f42c84b-2cc4-45c6-a9cb-0da6e2b4519e","Type":"ContainerStarted","Data":"94b8f6af74889d972baf5c4bfdaa0391b6ca064241668559f4cabef062cde7b1"} Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.933111 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" Feb 18 19:53:15 crc kubenswrapper[5007]: I0218 19:53:15.977580 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.001053 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.003205 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.018902 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.136663 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.147407 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.157093 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj"] Feb 18 19:53:16 crc kubenswrapper[5007]: W0218 19:53:16.171722 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebb4c943_3dac_4b30_805c_f40a96d848b6.slice/crio-dc4f577e7d9f97a3bc101bd338d33f88ddf8ccff4093b1ba1f0138aa2a8b3ca4 WatchSource:0}: Error finding container dc4f577e7d9f97a3bc101bd338d33f88ddf8ccff4093b1ba1f0138aa2a8b3ca4: Status 404 returned error can't find the container with id dc4f577e7d9f97a3bc101bd338d33f88ddf8ccff4093b1ba1f0138aa2a8b3ca4 Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.186045 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.186125 5007 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:16 crc kubenswrapper[5007]: E0218 19:53:16.186253 5007 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:53:16 crc kubenswrapper[5007]: E0218 19:53:16.186272 5007 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:53:16 crc kubenswrapper[5007]: E0218 19:53:16.186319 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs podName:29215188-681f-4991-9254-0ec04b455943 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:17.186300496 +0000 UTC m=+881.968420020 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs") pod "openstack-operator-controller-manager-dd6d5466c-r8s8q" (UID: "29215188-681f-4991-9254-0ec04b455943") : secret "metrics-server-cert" not found Feb 18 19:53:16 crc kubenswrapper[5007]: E0218 19:53:16.186345 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs podName:29215188-681f-4991-9254-0ec04b455943 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:17.186330187 +0000 UTC m=+881.968449701 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs") pod "openstack-operator-controller-manager-dd6d5466c-r8s8q" (UID: "29215188-681f-4991-9254-0ec04b455943") : secret "webhook-server-cert" not found Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.310489 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.332740 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-jt656"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.388629 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert\") pod \"infra-operator-controller-manager-79d975b745-l4tvp\" (UID: \"2ce1afe9-a5be-4d33-8755-5743475cdcff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:16 crc kubenswrapper[5007]: E0218 19:53:16.388843 5007 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:53:16 crc kubenswrapper[5007]: E0218 19:53:16.388906 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert podName:2ce1afe9-a5be-4d33-8755-5743475cdcff nodeName:}" failed. No retries permitted until 2026-02-18 19:53:18.388889077 +0000 UTC m=+883.171008601 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert") pod "infra-operator-controller-manager-79d975b745-l4tvp" (UID: "2ce1afe9-a5be-4d33-8755-5743475cdcff") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.797321 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr\" (UID: \"1d0172b2-fe1c-4265-a3cb-1c2f267223a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.797402 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4"] Feb 18 19:53:16 crc kubenswrapper[5007]: E0218 19:53:16.797476 5007 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:53:16 crc kubenswrapper[5007]: E0218 19:53:16.797523 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert podName:1d0172b2-fe1c-4265-a3cb-1c2f267223a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:18.797508625 +0000 UTC m=+883.579628149 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" (UID: "1d0172b2-fe1c-4265-a3cb-1c2f267223a5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.820341 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.876819 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.876900 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.876917 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.902525 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.908066 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.919615 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-lhhps"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.929289 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.946530 5007 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.956676 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-pc42v"] Feb 18 19:53:16 crc kubenswrapper[5007]: I0218 19:53:16.979181 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz" event={"ID":"1a0be001-6181-4af3-9656-e48611b7c93c","Type":"ContainerStarted","Data":"50fa7d1d1de37488c5191ee96d4a117914d353cf67af1970a01ae71d82d8d7d8"} Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.009742 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r" event={"ID":"39cb540d-ddc7-4d2f-a63b-2dd0c400ac01","Type":"ContainerStarted","Data":"9f1d691731e348961038346e90718e540309ae732d6e7d8f0321f30417be4fc2"} Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.037617 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h" event={"ID":"e077b63c-904f-48a4-9363-a40876135623","Type":"ContainerStarted","Data":"6041b20a154c6a4a9ebdba7ec83d42110398f89a616b66636099c7593591c5b1"} Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.054545 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj" event={"ID":"ebb4c943-3dac-4b30-805c-f40a96d848b6","Type":"ContainerStarted","Data":"dc4f577e7d9f97a3bc101bd338d33f88ddf8ccff4093b1ba1f0138aa2a8b3ca4"} Feb 18 19:53:17 crc kubenswrapper[5007]: E0218 19:53:17.063460 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g5lr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-pwdj9_openstack-operators(574dc561-0a0f-4118-b01f-954a238ba12c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:53:17 crc kubenswrapper[5007]: E0218 19:53:17.066417 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" podUID="574dc561-0a0f-4118-b01f-954a238ba12c" Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.080120 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8" event={"ID":"951189e9-dedd-4c70-9608-a06110b98f8d","Type":"ContainerStarted","Data":"58d74ada0bad4b91fd4ce00abc1972d960cf5875eeb1c1b77b900e51d6b9c0e1"} Feb 18 19:53:17 crc kubenswrapper[5007]: E0218 19:53:17.116902 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggzsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-pcsjz_openstack-operators(bb8aa80a-f91c-40cd-b007-1299afdbb054): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:53:17 crc kubenswrapper[5007]: E0218 19:53:17.117059 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5jz58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-pc42v_openstack-operators(06ec4490-9542-41e8-bcab-6b3cab6107fd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:53:17 crc kubenswrapper[5007]: E0218 19:53:17.117097 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4x4wx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-lhhps_openstack-operators(d494444c-827a-44ff-9e45-c1097f39b8d6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:53:17 crc kubenswrapper[5007]: E0218 19:53:17.118655 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" podUID="d494444c-827a-44ff-9e45-c1097f39b8d6" Feb 18 19:53:17 crc kubenswrapper[5007]: E0218 19:53:17.118733 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" podUID="bb8aa80a-f91c-40cd-b007-1299afdbb054" Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.119001 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz" event={"ID":"b4617470-b330-4c8a-bcc0-018bb10bc883","Type":"ContainerStarted","Data":"c7f68189223d9908356eebd742520352fa412b88bf381dc1327348b998c8a948"} Feb 18 
19:53:17 crc kubenswrapper[5007]: E0218 19:53:17.120900 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" podUID="06ec4490-9542-41e8-bcab-6b3cab6107fd" Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.125923 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v" event={"ID":"280b4324-5c76-4574-bcc3-3f5d271efd2d","Type":"ContainerStarted","Data":"8eda2f9aa7f54bc580ebac550f23b9148e5e48d826fad1fc932b9c2db4527a84"} Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.128572 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq" event={"ID":"6dcfa1b8-c3ec-45ce-9aba-0089081afae3","Type":"ContainerStarted","Data":"30532397e71e1b132b3811304127cb73ea87259076473e41345dabac2e598e5d"} Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.134165 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jt656" event={"ID":"75eaa04a-0f88-4307-be6e-fe8d742145a9","Type":"ContainerStarted","Data":"fcadcdcc45b40f0d51d0c0f3825e119686160425188a00494d541decd850b20e"} Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.136137 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb" event={"ID":"741852ae-b134-4f2f-81f5-358eeb0bcca4","Type":"ContainerStarted","Data":"f4c91e3979655b7e83e267bf9321b800e78fa2f7311e5f9050c208e0d9113369"} Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.140049 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw" 
event={"ID":"65f35275-3e75-45d1-b4db-865b02e3c061","Type":"ContainerStarted","Data":"364b11dc8005619e4a062fcbb5c120ca0559ef5345911337a1338f20e8739c5b"} Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.143761 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq" event={"ID":"8996f42c-6dec-44f2-9664-04debce79ae0","Type":"ContainerStarted","Data":"2f4626e17d1a544c21ef6e2d65183fffb3cda87d2277e2e3cc7f4225261d9c8f"} Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.147654 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4" event={"ID":"3d3f1240-ac95-4de0-97ca-94f10ae93a38","Type":"ContainerStarted","Data":"5009907f98b9a06b35da60338ee1c74d49e7ec583d95e7860407a1ec311906bb"} Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.207181 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:17 crc kubenswrapper[5007]: I0218 19:53:17.207280 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:17 crc kubenswrapper[5007]: E0218 19:53:17.207490 5007 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:53:17 crc kubenswrapper[5007]: E0218 19:53:17.207747 5007 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs podName:29215188-681f-4991-9254-0ec04b455943 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:19.207729049 +0000 UTC m=+883.989848573 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs") pod "openstack-operator-controller-manager-dd6d5466c-r8s8q" (UID: "29215188-681f-4991-9254-0ec04b455943") : secret "webhook-server-cert" not found Feb 18 19:53:17 crc kubenswrapper[5007]: E0218 19:53:17.208437 5007 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:53:17 crc kubenswrapper[5007]: E0218 19:53:17.209128 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs podName:29215188-681f-4991-9254-0ec04b455943 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:19.209099028 +0000 UTC m=+883.991218742 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs") pod "openstack-operator-controller-manager-dd6d5466c-r8s8q" (UID: "29215188-681f-4991-9254-0ec04b455943") : secret "metrics-server-cert" not found Feb 18 19:53:18 crc kubenswrapper[5007]: I0218 19:53:18.184042 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" event={"ID":"06ec4490-9542-41e8-bcab-6b3cab6107fd","Type":"ContainerStarted","Data":"6162c93e7ecce0dd22d361d729998f40a4a1fdb020ce3f341a912f2b38efdb3c"} Feb 18 19:53:18 crc kubenswrapper[5007]: I0218 19:53:18.187594 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" event={"ID":"574dc561-0a0f-4118-b01f-954a238ba12c","Type":"ContainerStarted","Data":"f57672162f92f3cb82e72e6933fe29c64e34ba3e96b4d167ff205ea69ef24fba"} Feb 18 19:53:18 crc kubenswrapper[5007]: E0218 19:53:18.188470 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" podUID="06ec4490-9542-41e8-bcab-6b3cab6107fd" Feb 18 19:53:18 crc kubenswrapper[5007]: E0218 19:53:18.191466 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" podUID="574dc561-0a0f-4118-b01f-954a238ba12c" Feb 18 19:53:18 crc kubenswrapper[5007]: I0218 
19:53:18.194614 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" event={"ID":"bb8aa80a-f91c-40cd-b007-1299afdbb054","Type":"ContainerStarted","Data":"538d699b6933635a6eebaf26e534117792baaa9af3e6a4c647db6e3263d23857"} Feb 18 19:53:18 crc kubenswrapper[5007]: E0218 19:53:18.198074 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" podUID="bb8aa80a-f91c-40cd-b007-1299afdbb054" Feb 18 19:53:18 crc kubenswrapper[5007]: I0218 19:53:18.202760 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6" event={"ID":"56389769-2657-410c-86dc-3dccb7de3836","Type":"ContainerStarted","Data":"a9417abfe3bf7b8ed9618c24387f9e41f51552ced92e6c18fcf8089cebcbacf8"} Feb 18 19:53:18 crc kubenswrapper[5007]: I0218 19:53:18.210170 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4" event={"ID":"39d2439e-f2db-43ba-b3af-d6642b9d5d22","Type":"ContainerStarted","Data":"2e4d897832008df9d6cd9b1390669fcdc56e80059a89dac639ff3f5c9cee9918"} Feb 18 19:53:18 crc kubenswrapper[5007]: I0218 19:53:18.220561 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" event={"ID":"d494444c-827a-44ff-9e45-c1097f39b8d6","Type":"ContainerStarted","Data":"1651a120e68c907a7bb16d126deae128c4b1058fd37108fc48afd1d739ab1e89"} Feb 18 19:53:18 crc kubenswrapper[5007]: E0218 19:53:18.222269 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" podUID="d494444c-827a-44ff-9e45-c1097f39b8d6" Feb 18 19:53:18 crc kubenswrapper[5007]: I0218 19:53:18.433487 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert\") pod \"infra-operator-controller-manager-79d975b745-l4tvp\" (UID: \"2ce1afe9-a5be-4d33-8755-5743475cdcff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:18 crc kubenswrapper[5007]: E0218 19:53:18.433633 5007 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:53:18 crc kubenswrapper[5007]: E0218 19:53:18.433708 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert podName:2ce1afe9-a5be-4d33-8755-5743475cdcff nodeName:}" failed. No retries permitted until 2026-02-18 19:53:22.433686777 +0000 UTC m=+887.215806301 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert") pod "infra-operator-controller-manager-79d975b745-l4tvp" (UID: "2ce1afe9-a5be-4d33-8755-5743475cdcff") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:53:18 crc kubenswrapper[5007]: I0218 19:53:18.840704 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr\" (UID: \"1d0172b2-fe1c-4265-a3cb-1c2f267223a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:18 crc kubenswrapper[5007]: E0218 19:53:18.840892 5007 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:53:18 crc kubenswrapper[5007]: E0218 19:53:18.840942 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert podName:1d0172b2-fe1c-4265-a3cb-1c2f267223a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:22.840928767 +0000 UTC m=+887.623048291 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" (UID: "1d0172b2-fe1c-4265-a3cb-1c2f267223a5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:53:19 crc kubenswrapper[5007]: E0218 19:53:19.229757 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" podUID="bb8aa80a-f91c-40cd-b007-1299afdbb054" Feb 18 19:53:19 crc kubenswrapper[5007]: E0218 19:53:19.229816 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" podUID="574dc561-0a0f-4118-b01f-954a238ba12c" Feb 18 19:53:19 crc kubenswrapper[5007]: E0218 19:53:19.230131 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" podUID="d494444c-827a-44ff-9e45-c1097f39b8d6" Feb 18 19:53:19 crc kubenswrapper[5007]: E0218 19:53:19.235812 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" podUID="06ec4490-9542-41e8-bcab-6b3cab6107fd" Feb 18 19:53:19 crc kubenswrapper[5007]: I0218 19:53:19.248220 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:19 crc kubenswrapper[5007]: I0218 19:53:19.248383 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:19 crc kubenswrapper[5007]: E0218 19:53:19.248391 5007 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:53:19 crc kubenswrapper[5007]: E0218 19:53:19.248476 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs podName:29215188-681f-4991-9254-0ec04b455943 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:23.248460355 +0000 UTC m=+888.030579879 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs") pod "openstack-operator-controller-manager-dd6d5466c-r8s8q" (UID: "29215188-681f-4991-9254-0ec04b455943") : secret "metrics-server-cert" not found Feb 18 19:53:19 crc kubenswrapper[5007]: E0218 19:53:19.248517 5007 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:53:19 crc kubenswrapper[5007]: E0218 19:53:19.248573 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs podName:29215188-681f-4991-9254-0ec04b455943 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:23.248558058 +0000 UTC m=+888.030677582 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs") pod "openstack-operator-controller-manager-dd6d5466c-r8s8q" (UID: "29215188-681f-4991-9254-0ec04b455943") : secret "webhook-server-cert" not found Feb 18 19:53:22 crc kubenswrapper[5007]: I0218 19:53:22.502061 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert\") pod \"infra-operator-controller-manager-79d975b745-l4tvp\" (UID: \"2ce1afe9-a5be-4d33-8755-5743475cdcff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:22 crc kubenswrapper[5007]: E0218 19:53:22.502306 5007 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:53:22 crc kubenswrapper[5007]: E0218 19:53:22.502422 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert 
podName:2ce1afe9-a5be-4d33-8755-5743475cdcff nodeName:}" failed. No retries permitted until 2026-02-18 19:53:30.502395338 +0000 UTC m=+895.284514902 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert") pod "infra-operator-controller-manager-79d975b745-l4tvp" (UID: "2ce1afe9-a5be-4d33-8755-5743475cdcff") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:53:22 crc kubenswrapper[5007]: I0218 19:53:22.907219 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr\" (UID: \"1d0172b2-fe1c-4265-a3cb-1c2f267223a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:22 crc kubenswrapper[5007]: E0218 19:53:22.907486 5007 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:53:22 crc kubenswrapper[5007]: E0218 19:53:22.907722 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert podName:1d0172b2-fe1c-4265-a3cb-1c2f267223a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:30.907707374 +0000 UTC m=+895.689826898 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" (UID: "1d0172b2-fe1c-4265-a3cb-1c2f267223a5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:53:23 crc kubenswrapper[5007]: I0218 19:53:23.314656 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:23 crc kubenswrapper[5007]: I0218 19:53:23.314794 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:23 crc kubenswrapper[5007]: E0218 19:53:23.314833 5007 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:53:23 crc kubenswrapper[5007]: E0218 19:53:23.314899 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs podName:29215188-681f-4991-9254-0ec04b455943 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:31.314880961 +0000 UTC m=+896.097000485 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs") pod "openstack-operator-controller-manager-dd6d5466c-r8s8q" (UID: "29215188-681f-4991-9254-0ec04b455943") : secret "webhook-server-cert" not found Feb 18 19:53:23 crc kubenswrapper[5007]: E0218 19:53:23.314905 5007 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:53:23 crc kubenswrapper[5007]: E0218 19:53:23.314941 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs podName:29215188-681f-4991-9254-0ec04b455943 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:31.314929952 +0000 UTC m=+896.097049476 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs") pod "openstack-operator-controller-manager-dd6d5466c-r8s8q" (UID: "29215188-681f-4991-9254-0ec04b455943") : secret "metrics-server-cert" not found Feb 18 19:53:29 crc kubenswrapper[5007]: E0218 19:53:29.620833 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 18 19:53:29 crc kubenswrapper[5007]: E0218 19:53:29.621579 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-whn74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-mlkjz_openstack-operators(b4617470-b330-4c8a-bcc0-018bb10bc883): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:29 crc kubenswrapper[5007]: E0218 19:53:29.622793 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz" podUID="b4617470-b330-4c8a-bcc0-018bb10bc883" Feb 18 19:53:30 crc kubenswrapper[5007]: E0218 19:53:30.309576 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz" podUID="b4617470-b330-4c8a-bcc0-018bb10bc883" Feb 18 19:53:30 crc kubenswrapper[5007]: E0218 19:53:30.398303 5007 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 18 19:53:30 crc kubenswrapper[5007]: E0218 19:53:30.398502 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k8nmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-fjdj8_openstack-operators(951189e9-dedd-4c70-9608-a06110b98f8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:30 crc kubenswrapper[5007]: E0218 19:53:30.399707 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8" podUID="951189e9-dedd-4c70-9608-a06110b98f8d" Feb 18 19:53:30 crc kubenswrapper[5007]: I0218 19:53:30.599388 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert\") pod \"infra-operator-controller-manager-79d975b745-l4tvp\" (UID: \"2ce1afe9-a5be-4d33-8755-5743475cdcff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:30 crc kubenswrapper[5007]: E0218 19:53:30.599559 5007 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Feb 18 19:53:30 crc kubenswrapper[5007]: E0218 19:53:30.599650 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert podName:2ce1afe9-a5be-4d33-8755-5743475cdcff nodeName:}" failed. No retries permitted until 2026-02-18 19:53:46.599626775 +0000 UTC m=+911.381746329 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert") pod "infra-operator-controller-manager-79d975b745-l4tvp" (UID: "2ce1afe9-a5be-4d33-8755-5743475cdcff") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:53:30 crc kubenswrapper[5007]: E0218 19:53:30.995959 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:30.996140 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m8vbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-ss4hw_openstack-operators(65f35275-3e75-45d1-b4db-865b02e3c061): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:30.997297 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw" podUID="65f35275-3e75-45d1-b4db-865b02e3c061" Feb 18 19:53:31 crc kubenswrapper[5007]: I0218 19:53:31.008315 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr\" (UID: \"1d0172b2-fe1c-4265-a3cb-1c2f267223a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:31.009089 5007 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:31.009142 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert podName:1d0172b2-fe1c-4265-a3cb-1c2f267223a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:47.009128189 +0000 UTC m=+911.791247713 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" (UID: "1d0172b2-fe1c-4265-a3cb-1c2f267223a5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:31.317309 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw" podUID="65f35275-3e75-45d1-b4db-865b02e3c061" Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:31.317646 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8" podUID="951189e9-dedd-4c70-9608-a06110b98f8d" Feb 18 19:53:31 crc kubenswrapper[5007]: I0218 19:53:31.412873 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:31 crc kubenswrapper[5007]: I0218 19:53:31.412996 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs\") pod 
\"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:31.413129 5007 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:31.413167 5007 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:31.413212 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs podName:29215188-681f-4991-9254-0ec04b455943 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:47.413193149 +0000 UTC m=+912.195312673 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs") pod "openstack-operator-controller-manager-dd6d5466c-r8s8q" (UID: "29215188-681f-4991-9254-0ec04b455943") : secret "webhook-server-cert" not found Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:31.413239 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs podName:29215188-681f-4991-9254-0ec04b455943 nodeName:}" failed. No retries permitted until 2026-02-18 19:53:47.41322157 +0000 UTC m=+912.195341144 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs") pod "openstack-operator-controller-manager-dd6d5466c-r8s8q" (UID: "29215188-681f-4991-9254-0ec04b455943") : secret "metrics-server-cert" not found Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:31.522952 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:31.523125 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44xpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-5fsf6_openstack-operators(56389769-2657-410c-86dc-3dccb7de3836): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:31 crc kubenswrapper[5007]: E0218 19:53:31.524436 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6" podUID="56389769-2657-410c-86dc-3dccb7de3836" Feb 18 19:53:32 crc kubenswrapper[5007]: E0218 19:53:32.331269 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6" podUID="56389769-2657-410c-86dc-3dccb7de3836" Feb 18 19:53:32 crc kubenswrapper[5007]: E0218 19:53:32.759740 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 18 19:53:32 crc kubenswrapper[5007]: E0218 19:53:32.760135 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-blmcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-lhcvb_openstack-operators(741852ae-b134-4f2f-81f5-358eeb0bcca4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:32 crc kubenswrapper[5007]: E0218 19:53:32.761385 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb" podUID="741852ae-b134-4f2f-81f5-358eeb0bcca4" Feb 18 19:53:33 crc kubenswrapper[5007]: E0218 19:53:33.337127 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb" podUID="741852ae-b134-4f2f-81f5-358eeb0bcca4" Feb 18 19:53:33 crc kubenswrapper[5007]: I0218 19:53:33.874989 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8rddb"] Feb 18 19:53:33 crc kubenswrapper[5007]: I0218 19:53:33.876474 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:53:33 crc kubenswrapper[5007]: I0218 19:53:33.882712 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rddb"] Feb 18 19:53:33 crc kubenswrapper[5007]: I0218 19:53:33.948538 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44790e5-a554-4ae5-80a7-c3e785dd5918-catalog-content\") pod \"redhat-operators-8rddb\" (UID: \"c44790e5-a554-4ae5-80a7-c3e785dd5918\") " pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:53:33 crc kubenswrapper[5007]: I0218 19:53:33.948608 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44790e5-a554-4ae5-80a7-c3e785dd5918-utilities\") pod \"redhat-operators-8rddb\" (UID: \"c44790e5-a554-4ae5-80a7-c3e785dd5918\") " pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:53:33 crc kubenswrapper[5007]: I0218 19:53:33.948661 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66p2m\" (UniqueName: \"kubernetes.io/projected/c44790e5-a554-4ae5-80a7-c3e785dd5918-kube-api-access-66p2m\") pod \"redhat-operators-8rddb\" (UID: \"c44790e5-a554-4ae5-80a7-c3e785dd5918\") " 
pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:53:34 crc kubenswrapper[5007]: I0218 19:53:34.049906 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44790e5-a554-4ae5-80a7-c3e785dd5918-catalog-content\") pod \"redhat-operators-8rddb\" (UID: \"c44790e5-a554-4ae5-80a7-c3e785dd5918\") " pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:53:34 crc kubenswrapper[5007]: I0218 19:53:34.050294 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44790e5-a554-4ae5-80a7-c3e785dd5918-utilities\") pod \"redhat-operators-8rddb\" (UID: \"c44790e5-a554-4ae5-80a7-c3e785dd5918\") " pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:53:34 crc kubenswrapper[5007]: I0218 19:53:34.050358 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66p2m\" (UniqueName: \"kubernetes.io/projected/c44790e5-a554-4ae5-80a7-c3e785dd5918-kube-api-access-66p2m\") pod \"redhat-operators-8rddb\" (UID: \"c44790e5-a554-4ae5-80a7-c3e785dd5918\") " pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:53:34 crc kubenswrapper[5007]: I0218 19:53:34.051557 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44790e5-a554-4ae5-80a7-c3e785dd5918-catalog-content\") pod \"redhat-operators-8rddb\" (UID: \"c44790e5-a554-4ae5-80a7-c3e785dd5918\") " pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:53:34 crc kubenswrapper[5007]: I0218 19:53:34.051918 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44790e5-a554-4ae5-80a7-c3e785dd5918-utilities\") pod \"redhat-operators-8rddb\" (UID: \"c44790e5-a554-4ae5-80a7-c3e785dd5918\") " pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:53:34 crc 
kubenswrapper[5007]: I0218 19:53:34.082937 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66p2m\" (UniqueName: \"kubernetes.io/projected/c44790e5-a554-4ae5-80a7-c3e785dd5918-kube-api-access-66p2m\") pod \"redhat-operators-8rddb\" (UID: \"c44790e5-a554-4ae5-80a7-c3e785dd5918\") " pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:53:34 crc kubenswrapper[5007]: I0218 19:53:34.199996 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:53:34 crc kubenswrapper[5007]: E0218 19:53:34.792244 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 18 19:53:34 crc kubenswrapper[5007]: E0218 19:53:34.792702 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2ddg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-sfsc4_openstack-operators(3d3f1240-ac95-4de0-97ca-94f10ae93a38): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:34 crc kubenswrapper[5007]: E0218 19:53:34.793921 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4" podUID="3d3f1240-ac95-4de0-97ca-94f10ae93a38" Feb 18 19:53:35 crc kubenswrapper[5007]: E0218 19:53:35.351114 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4" podUID="3d3f1240-ac95-4de0-97ca-94f10ae93a38" Feb 18 19:53:35 crc kubenswrapper[5007]: E0218 19:53:35.504168 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 18 19:53:35 crc kubenswrapper[5007]: E0218 19:53:35.504336 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jwhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-49vtz_openstack-operators(1a0be001-6181-4af3-9656-e48611b7c93c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:35 crc kubenswrapper[5007]: E0218 19:53:35.505519 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz" podUID="1a0be001-6181-4af3-9656-e48611b7c93c" Feb 18 19:53:36 crc kubenswrapper[5007]: E0218 19:53:36.041265 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 18 19:53:36 crc kubenswrapper[5007]: E0218 19:53:36.041443 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v8btp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-8clwq_openstack-operators(6dcfa1b8-c3ec-45ce-9aba-0089081afae3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:36 crc kubenswrapper[5007]: E0218 19:53:36.042618 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq" podUID="6dcfa1b8-c3ec-45ce-9aba-0089081afae3" Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.263555 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lq8mk"] Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.265123 5007 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.273145 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lq8mk"] Feb 18 19:53:36 crc kubenswrapper[5007]: E0218 19:53:36.362178 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz" podUID="1a0be001-6181-4af3-9656-e48611b7c93c" Feb 18 19:53:36 crc kubenswrapper[5007]: E0218 19:53:36.362255 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq" podUID="6dcfa1b8-c3ec-45ce-9aba-0089081afae3" Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.394557 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb2vp\" (UniqueName: \"kubernetes.io/projected/cfaf3750-e081-4424-aec5-4fee304984bb-kube-api-access-kb2vp\") pod \"certified-operators-lq8mk\" (UID: \"cfaf3750-e081-4424-aec5-4fee304984bb\") " pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.394665 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfaf3750-e081-4424-aec5-4fee304984bb-utilities\") pod \"certified-operators-lq8mk\" (UID: \"cfaf3750-e081-4424-aec5-4fee304984bb\") " 
pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.396243 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfaf3750-e081-4424-aec5-4fee304984bb-catalog-content\") pod \"certified-operators-lq8mk\" (UID: \"cfaf3750-e081-4424-aec5-4fee304984bb\") " pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.497731 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb2vp\" (UniqueName: \"kubernetes.io/projected/cfaf3750-e081-4424-aec5-4fee304984bb-kube-api-access-kb2vp\") pod \"certified-operators-lq8mk\" (UID: \"cfaf3750-e081-4424-aec5-4fee304984bb\") " pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.497788 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfaf3750-e081-4424-aec5-4fee304984bb-utilities\") pod \"certified-operators-lq8mk\" (UID: \"cfaf3750-e081-4424-aec5-4fee304984bb\") " pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.497840 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfaf3750-e081-4424-aec5-4fee304984bb-catalog-content\") pod \"certified-operators-lq8mk\" (UID: \"cfaf3750-e081-4424-aec5-4fee304984bb\") " pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.498363 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfaf3750-e081-4424-aec5-4fee304984bb-catalog-content\") pod \"certified-operators-lq8mk\" (UID: \"cfaf3750-e081-4424-aec5-4fee304984bb\") " 
pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.498579 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfaf3750-e081-4424-aec5-4fee304984bb-utilities\") pod \"certified-operators-lq8mk\" (UID: \"cfaf3750-e081-4424-aec5-4fee304984bb\") " pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.521291 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb2vp\" (UniqueName: \"kubernetes.io/projected/cfaf3750-e081-4424-aec5-4fee304984bb-kube-api-access-kb2vp\") pod \"certified-operators-lq8mk\" (UID: \"cfaf3750-e081-4424-aec5-4fee304984bb\") " pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:36 crc kubenswrapper[5007]: I0218 19:53:36.587362 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:36 crc kubenswrapper[5007]: E0218 19:53:36.887588 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 18 19:53:36 crc kubenswrapper[5007]: E0218 19:53:36.887801 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t8xqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-jt656_openstack-operators(75eaa04a-0f88-4307-be6e-fe8d742145a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:36 crc kubenswrapper[5007]: E0218 19:53:36.889687 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jt656" podUID="75eaa04a-0f88-4307-be6e-fe8d742145a9" Feb 18 19:53:37 crc kubenswrapper[5007]: E0218 19:53:37.368692 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jt656" podUID="75eaa04a-0f88-4307-be6e-fe8d742145a9" Feb 18 19:53:37 crc kubenswrapper[5007]: E0218 19:53:37.573935 5007 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 18 19:53:37 crc kubenswrapper[5007]: E0218 19:53:37.574135 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-skjn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-lmg2v_openstack-operators(280b4324-5c76-4574-bcc3-3f5d271efd2d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:37 crc kubenswrapper[5007]: E0218 19:53:37.575303 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v" podUID="280b4324-5c76-4574-bcc3-3f5d271efd2d" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.136713 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.137002 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m2gwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-tlvn4_openstack-operators(39d2439e-f2db-43ba-b3af-d6642b9d5d22): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.138254 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4" podUID="39d2439e-f2db-43ba-b3af-d6642b9d5d22" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.210633 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.211056 5007 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.211208 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.80:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwsqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7f6db7f7d8-d772r_openstack-operators(39cb540d-ddc7-4d2f-a63b-2dd0c400ac01): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.212786 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r" podUID="39cb540d-ddc7-4d2f-a63b-2dd0c400ac01" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.379983 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v" podUID="280b4324-5c76-4574-bcc3-3f5d271efd2d" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.380118 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r" podUID="39cb540d-ddc7-4d2f-a63b-2dd0c400ac01" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.385630 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4" podUID="39d2439e-f2db-43ba-b3af-d6642b9d5d22" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.824695 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.824881 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6b27c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-ckbhj_openstack-operators(ebb4c943-3dac-4b30-805c-f40a96d848b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:38 crc kubenswrapper[5007]: E0218 19:53:38.826060 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj" podUID="ebb4c943-3dac-4b30-805c-f40a96d848b6" Feb 18 19:53:39 crc kubenswrapper[5007]: E0218 19:53:39.391291 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj" podUID="ebb4c943-3dac-4b30-805c-f40a96d848b6" Feb 18 19:53:41 crc kubenswrapper[5007]: E0218 19:53:41.637561 5007 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 18 19:53:41 crc kubenswrapper[5007]: E0218 19:53:41.638109 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqr6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-p28vq_openstack-operators(8996f42c-6dec-44f2-9664-04debce79ae0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:41 crc kubenswrapper[5007]: E0218 19:53:41.640271 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq" podUID="8996f42c-6dec-44f2-9664-04debce79ae0" Feb 18 19:53:41 crc kubenswrapper[5007]: I0218 19:53:41.920955 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lq8mk"] Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.134978 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rddb"] Feb 18 19:53:42 crc kubenswrapper[5007]: W0218 19:53:42.144752 5007 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc44790e5_a554_4ae5_80a7_c3e785dd5918.slice/crio-11f0f56e174ac3a611ffe1e00b7c044ed9aa48d1800a78fde978a840c39f4fe0 WatchSource:0}: Error finding container 11f0f56e174ac3a611ffe1e00b7c044ed9aa48d1800a78fde978a840c39f4fe0: Status 404 returned error can't find the container with id 11f0f56e174ac3a611ffe1e00b7c044ed9aa48d1800a78fde978a840c39f4fe0 Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.410283 5007 generic.go:334] "Generic (PLEG): container finished" podID="cfaf3750-e081-4424-aec5-4fee304984bb" containerID="0f0c62eafebe9fb261c0244f0df006bb15fedac7a37a11c1a9f1136c62ce08ff" exitCode=0 Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.410324 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq8mk" event={"ID":"cfaf3750-e081-4424-aec5-4fee304984bb","Type":"ContainerDied","Data":"0f0c62eafebe9fb261c0244f0df006bb15fedac7a37a11c1a9f1136c62ce08ff"} Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.410393 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq8mk" event={"ID":"cfaf3750-e081-4424-aec5-4fee304984bb","Type":"ContainerStarted","Data":"b95b1d3f895df892c759f11889d7f6ae79e116edb2ca6b0e262429f7362c53e3"} Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.412053 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.413517 5007 generic.go:334] "Generic (PLEG): container finished" podID="c44790e5-a554-4ae5-80a7-c3e785dd5918" containerID="bee816a2fb34cc6a80f806e876a1c962f2af64d76af289ade45e770f8dcdc69d" exitCode=0 Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.413577 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rddb" 
event={"ID":"c44790e5-a554-4ae5-80a7-c3e785dd5918","Type":"ContainerDied","Data":"bee816a2fb34cc6a80f806e876a1c962f2af64d76af289ade45e770f8dcdc69d"} Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.413610 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rddb" event={"ID":"c44790e5-a554-4ae5-80a7-c3e785dd5918","Type":"ContainerStarted","Data":"11f0f56e174ac3a611ffe1e00b7c044ed9aa48d1800a78fde978a840c39f4fe0"} Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.415066 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h" event={"ID":"e077b63c-904f-48a4-9363-a40876135623","Type":"ContainerStarted","Data":"a977cd67f5205fd9634617b7fe4b9d7122d6334beadb0d36bb9bf891f5b3d175"} Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.415194 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h" Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.416988 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" event={"ID":"d494444c-827a-44ff-9e45-c1097f39b8d6","Type":"ContainerStarted","Data":"43f3962f38d9fafb705b60d2a22f2ada916947a03280bbc230e1ea231c78a9a2"} Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.417121 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.422079 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj" event={"ID":"2f42c84b-2cc4-45c6-a9cb-0da6e2b4519e","Type":"ContainerStarted","Data":"35b90fa5592001f5600be8eac5bdc0d5d1875c58341d6c930c5b5623fd1f4c97"} Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.422134 5007 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj" Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.423895 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" event={"ID":"06ec4490-9542-41e8-bcab-6b3cab6107fd","Type":"ContainerStarted","Data":"49791ad44aab5eba280baa88fff149be439a0e4ac9622d8103967c951b70bdb5"} Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.424039 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.425170 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" event={"ID":"574dc561-0a0f-4118-b01f-954a238ba12c","Type":"ContainerStarted","Data":"c5d8ac5310928046804a885d6a3f072b15e1f17d59eb8a990e0755b38820ab90"} Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.425308 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.426244 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" event={"ID":"bb8aa80a-f91c-40cd-b007-1299afdbb054","Type":"ContainerStarted","Data":"1eaab934a4e63a68b57835f39d0812c4162ceea68cb0f3e66edee05d7ac1cda8"} Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.426380 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" Feb 18 19:53:42 crc kubenswrapper[5007]: E0218 19:53:42.427077 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq" podUID="8996f42c-6dec-44f2-9664-04debce79ae0" Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.480623 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" podStartSLOduration=3.913212281 podStartE2EDuration="28.480607058s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:17.116676098 +0000 UTC m=+881.898795622" lastFinishedPulling="2026-02-18 19:53:41.684070865 +0000 UTC m=+906.466190399" observedRunningTime="2026-02-18 19:53:42.479891888 +0000 UTC m=+907.262011422" watchObservedRunningTime="2026-02-18 19:53:42.480607058 +0000 UTC m=+907.262726582" Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.507654 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" podStartSLOduration=3.958633113 podStartE2EDuration="28.507637471s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:17.116825072 +0000 UTC m=+881.898944596" lastFinishedPulling="2026-02-18 19:53:41.66582941 +0000 UTC m=+906.447948954" observedRunningTime="2026-02-18 19:53:42.502522077 +0000 UTC m=+907.284641611" watchObservedRunningTime="2026-02-18 19:53:42.507637471 +0000 UTC m=+907.289756995" Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.522127 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" podStartSLOduration=3.867656374 podStartE2EDuration="28.52211168s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:17.063239459 +0000 
UTC m=+881.845358983" lastFinishedPulling="2026-02-18 19:53:41.717694765 +0000 UTC m=+906.499814289" observedRunningTime="2026-02-18 19:53:42.520169565 +0000 UTC m=+907.302289089" watchObservedRunningTime="2026-02-18 19:53:42.52211168 +0000 UTC m=+907.304231204" Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.560592 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h" podStartSLOduration=5.262583333 podStartE2EDuration="28.560571936s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:16.074608502 +0000 UTC m=+880.856728026" lastFinishedPulling="2026-02-18 19:53:39.372597105 +0000 UTC m=+904.154716629" observedRunningTime="2026-02-18 19:53:42.538109472 +0000 UTC m=+907.320228986" watchObservedRunningTime="2026-02-18 19:53:42.560571936 +0000 UTC m=+907.342691460" Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.564338 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj" podStartSLOduration=4.866971262 podStartE2EDuration="28.564328722s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:15.675121582 +0000 UTC m=+880.457241106" lastFinishedPulling="2026-02-18 19:53:39.372479042 +0000 UTC m=+904.154598566" observedRunningTime="2026-02-18 19:53:42.559912017 +0000 UTC m=+907.342031541" watchObservedRunningTime="2026-02-18 19:53:42.564328722 +0000 UTC m=+907.346448246" Feb 18 19:53:42 crc kubenswrapper[5007]: I0218 19:53:42.599969 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" podStartSLOduration=3.999363853 podStartE2EDuration="28.599952138s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:17.116996857 +0000 UTC m=+881.899116381" 
lastFinishedPulling="2026-02-18 19:53:41.717585152 +0000 UTC m=+906.499704666" observedRunningTime="2026-02-18 19:53:42.59826522 +0000 UTC m=+907.380384764" watchObservedRunningTime="2026-02-18 19:53:42.599952138 +0000 UTC m=+907.382071662" Feb 18 19:53:43 crc kubenswrapper[5007]: I0218 19:53:43.434839 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rddb" event={"ID":"c44790e5-a554-4ae5-80a7-c3e785dd5918","Type":"ContainerStarted","Data":"f7c8ee18e32ff65a4332ea8c2568087c08a52d39584b922071798a1297579424"} Feb 18 19:53:43 crc kubenswrapper[5007]: I0218 19:53:43.437294 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq8mk" event={"ID":"cfaf3750-e081-4424-aec5-4fee304984bb","Type":"ContainerStarted","Data":"29fe5dff7b45af4c92aae760315d3b56a3062f82973314792b7e305478c24fee"} Feb 18 19:53:44 crc kubenswrapper[5007]: I0218 19:53:44.447683 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw" event={"ID":"65f35275-3e75-45d1-b4db-865b02e3c061","Type":"ContainerStarted","Data":"00a0428c4db0b076c98965d6e2105d560e92afe8f72d64811c84dff95ad73356"} Feb 18 19:53:44 crc kubenswrapper[5007]: I0218 19:53:44.448202 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw" Feb 18 19:53:44 crc kubenswrapper[5007]: I0218 19:53:44.457032 5007 generic.go:334] "Generic (PLEG): container finished" podID="cfaf3750-e081-4424-aec5-4fee304984bb" containerID="29fe5dff7b45af4c92aae760315d3b56a3062f82973314792b7e305478c24fee" exitCode=0 Feb 18 19:53:44 crc kubenswrapper[5007]: I0218 19:53:44.457591 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq8mk" 
event={"ID":"cfaf3750-e081-4424-aec5-4fee304984bb","Type":"ContainerDied","Data":"29fe5dff7b45af4c92aae760315d3b56a3062f82973314792b7e305478c24fee"} Feb 18 19:53:44 crc kubenswrapper[5007]: I0218 19:53:44.472495 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw" podStartSLOduration=3.290882527 podStartE2EDuration="30.472474715s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:16.173390721 +0000 UTC m=+880.955510245" lastFinishedPulling="2026-02-18 19:53:43.354982909 +0000 UTC m=+908.137102433" observedRunningTime="2026-02-18 19:53:44.471369983 +0000 UTC m=+909.253489507" watchObservedRunningTime="2026-02-18 19:53:44.472474715 +0000 UTC m=+909.254594259" Feb 18 19:53:45 crc kubenswrapper[5007]: I0218 19:53:45.471390 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq8mk" event={"ID":"cfaf3750-e081-4424-aec5-4fee304984bb","Type":"ContainerStarted","Data":"9d85b727f1830984ea1d99a2c7ab61f44e7816b60b93bbe42467bd8843dab42a"} Feb 18 19:53:45 crc kubenswrapper[5007]: I0218 19:53:45.473101 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8" event={"ID":"951189e9-dedd-4c70-9608-a06110b98f8d","Type":"ContainerStarted","Data":"b44ab4a687d63eb53a299e726ce7064ddb71c47c1e90667870c6d60a4a4f58cc"} Feb 18 19:53:45 crc kubenswrapper[5007]: I0218 19:53:45.473324 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8" Feb 18 19:53:45 crc kubenswrapper[5007]: I0218 19:53:45.498888 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lq8mk" podStartSLOduration=7.019360432 podStartE2EDuration="9.498870737s" podCreationTimestamp="2026-02-18 19:53:36 +0000 UTC" 
firstStartedPulling="2026-02-18 19:53:42.411858297 +0000 UTC m=+907.193977811" lastFinishedPulling="2026-02-18 19:53:44.891368582 +0000 UTC m=+909.673488116" observedRunningTime="2026-02-18 19:53:45.493983519 +0000 UTC m=+910.276103043" watchObservedRunningTime="2026-02-18 19:53:45.498870737 +0000 UTC m=+910.280990261" Feb 18 19:53:45 crc kubenswrapper[5007]: I0218 19:53:45.513038 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8" podStartSLOduration=3.366150781 podStartE2EDuration="31.513015096s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:16.178378922 +0000 UTC m=+880.960498446" lastFinishedPulling="2026-02-18 19:53:44.325243237 +0000 UTC m=+909.107362761" observedRunningTime="2026-02-18 19:53:45.50817076 +0000 UTC m=+910.290290294" watchObservedRunningTime="2026-02-18 19:53:45.513015096 +0000 UTC m=+910.295134630" Feb 18 19:53:45 crc kubenswrapper[5007]: I0218 19:53:45.664645 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:53:45 crc kubenswrapper[5007]: I0218 19:53:45.664715 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:53:46 crc kubenswrapper[5007]: I0218 19:53:46.008476 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-lhhps" Feb 18 19:53:46 crc kubenswrapper[5007]: I0218 19:53:46.487377 
5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6" event={"ID":"56389769-2657-410c-86dc-3dccb7de3836","Type":"ContainerStarted","Data":"2c45f48dd757ce15fdc061dea9ba09549e15f835c3b8e4d344005c26e89ade69"} Feb 18 19:53:46 crc kubenswrapper[5007]: I0218 19:53:46.487904 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6" Feb 18 19:53:46 crc kubenswrapper[5007]: I0218 19:53:46.489861 5007 generic.go:334] "Generic (PLEG): container finished" podID="c44790e5-a554-4ae5-80a7-c3e785dd5918" containerID="f7c8ee18e32ff65a4332ea8c2568087c08a52d39584b922071798a1297579424" exitCode=0 Feb 18 19:53:46 crc kubenswrapper[5007]: I0218 19:53:46.489935 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rddb" event={"ID":"c44790e5-a554-4ae5-80a7-c3e785dd5918","Type":"ContainerDied","Data":"f7c8ee18e32ff65a4332ea8c2568087c08a52d39584b922071798a1297579424"} Feb 18 19:53:46 crc kubenswrapper[5007]: I0218 19:53:46.530268 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6" podStartSLOduration=4.185698746 podStartE2EDuration="32.530232511s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:17.03706904 +0000 UTC m=+881.819188564" lastFinishedPulling="2026-02-18 19:53:45.381602805 +0000 UTC m=+910.163722329" observedRunningTime="2026-02-18 19:53:46.509065763 +0000 UTC m=+911.291185337" watchObservedRunningTime="2026-02-18 19:53:46.530232511 +0000 UTC m=+911.312352035" Feb 18 19:53:46 crc kubenswrapper[5007]: I0218 19:53:46.588847 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:46 crc kubenswrapper[5007]: I0218 19:53:46.589049 5007 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:46 crc kubenswrapper[5007]: I0218 19:53:46.693374 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert\") pod \"infra-operator-controller-manager-79d975b745-l4tvp\" (UID: \"2ce1afe9-a5be-4d33-8755-5743475cdcff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:46 crc kubenswrapper[5007]: I0218 19:53:46.700862 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ce1afe9-a5be-4d33-8755-5743475cdcff-cert\") pod \"infra-operator-controller-manager-79d975b745-l4tvp\" (UID: \"2ce1afe9-a5be-4d33-8755-5743475cdcff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:46 crc kubenswrapper[5007]: I0218 19:53:46.798705 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8wbc2" Feb 18 19:53:46 crc kubenswrapper[5007]: I0218 19:53:46.806925 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:47 crc kubenswrapper[5007]: W0218 19:53:47.064759 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ce1afe9_a5be_4d33_8755_5743475cdcff.slice/crio-8ef35378e329cd7c9f399539ff0d3a866bc57c39589f07aba51ce57c12fd1b60 WatchSource:0}: Error finding container 8ef35378e329cd7c9f399539ff0d3a866bc57c39589f07aba51ce57c12fd1b60: Status 404 returned error can't find the container with id 8ef35378e329cd7c9f399539ff0d3a866bc57c39589f07aba51ce57c12fd1b60 Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 19:53:47.067193 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp"] Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 19:53:47.100345 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr\" (UID: \"1d0172b2-fe1c-4265-a3cb-1c2f267223a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 19:53:47.106404 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d0172b2-fe1c-4265-a3cb-1c2f267223a5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr\" (UID: \"1d0172b2-fe1c-4265-a3cb-1c2f267223a5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 19:53:47.336790 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wsc6z" Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 
19:53:47.345020 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 19:53:47.517179 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 19:53:47.517235 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" event={"ID":"2ce1afe9-a5be-4d33-8755-5743475cdcff","Type":"ContainerStarted","Data":"8ef35378e329cd7c9f399539ff0d3a866bc57c39589f07aba51ce57c12fd1b60"} Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 19:53:47.517291 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 19:53:47.521160 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-metrics-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 19:53:47.526838 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/29215188-681f-4991-9254-0ec04b455943-webhook-certs\") pod \"openstack-operator-controller-manager-dd6d5466c-r8s8q\" (UID: \"29215188-681f-4991-9254-0ec04b455943\") " pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 19:53:47.606243 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gqwt7" Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 19:53:47.614334 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:47 crc kubenswrapper[5007]: I0218 19:53:47.630534 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-lq8mk" podUID="cfaf3750-e081-4424-aec5-4fee304984bb" containerName="registry-server" probeResult="failure" output=< Feb 18 19:53:47 crc kubenswrapper[5007]: timeout: failed to connect service ":50051" within 1s Feb 18 19:53:47 crc kubenswrapper[5007]: > Feb 18 19:53:48 crc kubenswrapper[5007]: I0218 19:53:48.610588 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr"] Feb 18 19:53:48 crc kubenswrapper[5007]: W0218 19:53:48.623694 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d0172b2_fe1c_4265_a3cb_1c2f267223a5.slice/crio-3809833e84aa6f007ebca368728ba3bc3304f6dd88d1a62b30ce55065611c2f4 WatchSource:0}: Error finding container 3809833e84aa6f007ebca368728ba3bc3304f6dd88d1a62b30ce55065611c2f4: Status 404 returned error can't find the container with id 3809833e84aa6f007ebca368728ba3bc3304f6dd88d1a62b30ce55065611c2f4 Feb 18 19:53:48 crc kubenswrapper[5007]: I0218 19:53:48.692213 5007 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q"] Feb 18 19:53:48 crc kubenswrapper[5007]: W0218 19:53:48.696350 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29215188_681f_4991_9254_0ec04b455943.slice/crio-8ad437693bf340f06d996f8c0ea3aba1a06d62a2e0db17f24bf638f313dfc824 WatchSource:0}: Error finding container 8ad437693bf340f06d996f8c0ea3aba1a06d62a2e0db17f24bf638f313dfc824: Status 404 returned error can't find the container with id 8ad437693bf340f06d996f8c0ea3aba1a06d62a2e0db17f24bf638f313dfc824 Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.524117 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sjl6z"] Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.526122 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.535404 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjl6z"] Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.556250 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" event={"ID":"1d0172b2-fe1c-4265-a3cb-1c2f267223a5","Type":"ContainerStarted","Data":"3809833e84aa6f007ebca368728ba3bc3304f6dd88d1a62b30ce55065611c2f4"} Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.557419 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" event={"ID":"29215188-681f-4991-9254-0ec04b455943","Type":"ContainerStarted","Data":"8ad437693bf340f06d996f8c0ea3aba1a06d62a2e0db17f24bf638f313dfc824"} Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.662423 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e40f93-0889-414e-aac3-710e9fbd2ade-catalog-content\") pod \"redhat-marketplace-sjl6z\" (UID: \"c5e40f93-0889-414e-aac3-710e9fbd2ade\") " pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.662487 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e40f93-0889-414e-aac3-710e9fbd2ade-utilities\") pod \"redhat-marketplace-sjl6z\" (UID: \"c5e40f93-0889-414e-aac3-710e9fbd2ade\") " pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.662545 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8jr4\" (UniqueName: \"kubernetes.io/projected/c5e40f93-0889-414e-aac3-710e9fbd2ade-kube-api-access-w8jr4\") pod \"redhat-marketplace-sjl6z\" (UID: \"c5e40f93-0889-414e-aac3-710e9fbd2ade\") " pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.763606 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8jr4\" (UniqueName: \"kubernetes.io/projected/c5e40f93-0889-414e-aac3-710e9fbd2ade-kube-api-access-w8jr4\") pod \"redhat-marketplace-sjl6z\" (UID: \"c5e40f93-0889-414e-aac3-710e9fbd2ade\") " pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.763707 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e40f93-0889-414e-aac3-710e9fbd2ade-catalog-content\") pod \"redhat-marketplace-sjl6z\" (UID: \"c5e40f93-0889-414e-aac3-710e9fbd2ade\") " pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 
19:53:49.763727 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e40f93-0889-414e-aac3-710e9fbd2ade-utilities\") pod \"redhat-marketplace-sjl6z\" (UID: \"c5e40f93-0889-414e-aac3-710e9fbd2ade\") " pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.764177 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e40f93-0889-414e-aac3-710e9fbd2ade-utilities\") pod \"redhat-marketplace-sjl6z\" (UID: \"c5e40f93-0889-414e-aac3-710e9fbd2ade\") " pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.764228 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e40f93-0889-414e-aac3-710e9fbd2ade-catalog-content\") pod \"redhat-marketplace-sjl6z\" (UID: \"c5e40f93-0889-414e-aac3-710e9fbd2ade\") " pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.785421 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8jr4\" (UniqueName: \"kubernetes.io/projected/c5e40f93-0889-414e-aac3-710e9fbd2ade-kube-api-access-w8jr4\") pod \"redhat-marketplace-sjl6z\" (UID: \"c5e40f93-0889-414e-aac3-710e9fbd2ade\") " pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:53:49 crc kubenswrapper[5007]: I0218 19:53:49.853500 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:53:50 crc kubenswrapper[5007]: I0218 19:53:50.293775 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjl6z"] Feb 18 19:53:50 crc kubenswrapper[5007]: W0218 19:53:50.308625 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5e40f93_0889_414e_aac3_710e9fbd2ade.slice/crio-f810659b1a1d9d6b6380b6342b2185f02a2d89b1b0032da5c554040afd5e6d4d WatchSource:0}: Error finding container f810659b1a1d9d6b6380b6342b2185f02a2d89b1b0032da5c554040afd5e6d4d: Status 404 returned error can't find the container with id f810659b1a1d9d6b6380b6342b2185f02a2d89b1b0032da5c554040afd5e6d4d Feb 18 19:53:50 crc kubenswrapper[5007]: I0218 19:53:50.564115 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjl6z" event={"ID":"c5e40f93-0889-414e-aac3-710e9fbd2ade","Type":"ContainerStarted","Data":"f810659b1a1d9d6b6380b6342b2185f02a2d89b1b0032da5c554040afd5e6d4d"} Feb 18 19:53:54 crc kubenswrapper[5007]: I0218 19:53:54.587785 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g8hqj" Feb 18 19:53:54 crc kubenswrapper[5007]: I0218 19:53:54.658163 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-m7x7h" Feb 18 19:53:54 crc kubenswrapper[5007]: I0218 19:53:54.723473 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-fjdj8" Feb 18 19:53:54 crc kubenswrapper[5007]: I0218 19:53:54.834246 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ss4hw" Feb 18 19:53:55 crc 
kubenswrapper[5007]: I0218 19:53:55.341140 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-5fsf6" Feb 18 19:53:55 crc kubenswrapper[5007]: I0218 19:53:55.455082 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pcsjz" Feb 18 19:53:55 crc kubenswrapper[5007]: I0218 19:53:55.702454 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pc42v" Feb 18 19:53:55 crc kubenswrapper[5007]: I0218 19:53:55.936591 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-pwdj9" Feb 18 19:53:56 crc kubenswrapper[5007]: I0218 19:53:56.620154 5007 generic.go:334] "Generic (PLEG): container finished" podID="c5e40f93-0889-414e-aac3-710e9fbd2ade" containerID="520ae82a149926053ea16819d351ce62beb123bec6ff59c92f2d272dfc5b355c" exitCode=0 Feb 18 19:53:56 crc kubenswrapper[5007]: I0218 19:53:56.620324 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjl6z" event={"ID":"c5e40f93-0889-414e-aac3-710e9fbd2ade","Type":"ContainerDied","Data":"520ae82a149926053ea16819d351ce62beb123bec6ff59c92f2d272dfc5b355c"} Feb 18 19:53:56 crc kubenswrapper[5007]: I0218 19:53:56.622168 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" event={"ID":"29215188-681f-4991-9254-0ec04b455943","Type":"ContainerStarted","Data":"321d3e92ba4afabfb85a0e5275f05463d5898c5f99f4aee83abadcaee120979e"} Feb 18 19:53:56 crc kubenswrapper[5007]: I0218 19:53:56.640123 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:56 crc kubenswrapper[5007]: 
I0218 19:53:56.690930 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:53:57 crc kubenswrapper[5007]: I0218 19:53:57.175712 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lq8mk"] Feb 18 19:53:57 crc kubenswrapper[5007]: I0218 19:53:57.645162 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:53:57 crc kubenswrapper[5007]: I0218 19:53:57.708582 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" podStartSLOduration=42.708548951 podStartE2EDuration="42.708548951s" podCreationTimestamp="2026-02-18 19:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:57.707738098 +0000 UTC m=+922.489857622" watchObservedRunningTime="2026-02-18 19:53:57.708548951 +0000 UTC m=+922.490668495" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.676765 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r" event={"ID":"39cb540d-ddc7-4d2f-a63b-2dd0c400ac01","Type":"ContainerStarted","Data":"e52e764ee8497d18bd973531636e84d44652690deb1e1bfdefdca27bb9fc82c0"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.677752 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.681571 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj" 
event={"ID":"ebb4c943-3dac-4b30-805c-f40a96d848b6","Type":"ContainerStarted","Data":"25c098ba87b8efd2850f8ac336d9282cfcaf9f6d6803f31889a0140b182b37ba"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.682134 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.688985 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" event={"ID":"2ce1afe9-a5be-4d33-8755-5743475cdcff","Type":"ContainerStarted","Data":"0ab860c906cb2f4c68a3f5b66ac15f2671aa2a0ea76cd22a8db80d2c8780a3e5"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.689484 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.692634 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq" event={"ID":"6dcfa1b8-c3ec-45ce-9aba-0089081afae3","Type":"ContainerStarted","Data":"83d75956dd2d03a6c8360ea542dd4441c8ef63adca916cabcdeb9b11c084e6fc"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.693304 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.716481 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4" event={"ID":"39d2439e-f2db-43ba-b3af-d6642b9d5d22","Type":"ContainerStarted","Data":"9a505ca27b57c198b0869d1a7b48df21cdbe44ee68fe94088f480efadcaa2f20"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.734873 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq" event={"ID":"8996f42c-6dec-44f2-9664-04debce79ae0","Type":"ContainerStarted","Data":"85938c9f9452f4c7ab0b5e4ce8aaf73208fe46fe3ef6426b2eb9b41b0ed70421"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.735275 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.744864 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v" event={"ID":"280b4324-5c76-4574-bcc3-3f5d271efd2d","Type":"ContainerStarted","Data":"5bdf1b8ae66a6aecfb0095e12a28769993769c13b7bad9e9bd7a6a7c43a9d22d"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.745704 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.755165 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz" event={"ID":"1a0be001-6181-4af3-9656-e48611b7c93c","Type":"ContainerStarted","Data":"80dda40d56a2671f9549de02e542f5426281b7a0b0ec9a426e6d5d0e3e40ba58"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.755535 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.760809 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r" podStartSLOduration=4.085740572 podStartE2EDuration="44.760789064s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:16.940627866 +0000 UTC m=+881.722747390" 
lastFinishedPulling="2026-02-18 19:53:57.615676358 +0000 UTC m=+922.397795882" observedRunningTime="2026-02-18 19:53:58.758386886 +0000 UTC m=+923.540506430" watchObservedRunningTime="2026-02-18 19:53:58.760789064 +0000 UTC m=+923.542908588" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.777690 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rddb" event={"ID":"c44790e5-a554-4ae5-80a7-c3e785dd5918","Type":"ContainerStarted","Data":"b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.792662 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4" event={"ID":"3d3f1240-ac95-4de0-97ca-94f10ae93a38","Type":"ContainerStarted","Data":"04ce16512b6a8dddf6206e79f9049ed9152a6dd001e26ffaedf9ff7b1aff03e7"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.793419 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.808372 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb" event={"ID":"741852ae-b134-4f2f-81f5-358eeb0bcca4","Type":"ContainerStarted","Data":"0115299a53f5d738bf9c1abd222fb6518db5968baaa90261328f7c02b0444075"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.809838 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.830930 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz" 
event={"ID":"b4617470-b330-4c8a-bcc0-018bb10bc883","Type":"ContainerStarted","Data":"f03b63a05c03a1c23bf3e712e45a7f6b41933e6a5212986c128afca940d6a8f1"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.832115 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.855331 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v" podStartSLOduration=3.226084347 podStartE2EDuration="44.855304753s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:15.98847579 +0000 UTC m=+880.770595314" lastFinishedPulling="2026-02-18 19:53:57.617696196 +0000 UTC m=+922.399815720" observedRunningTime="2026-02-18 19:53:58.836677047 +0000 UTC m=+923.618796581" watchObservedRunningTime="2026-02-18 19:53:58.855304753 +0000 UTC m=+923.637424277" Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.859934 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lq8mk" podUID="cfaf3750-e081-4424-aec5-4fee304984bb" containerName="registry-server" containerID="cri-o://9d85b727f1830984ea1d99a2c7ab61f44e7816b60b93bbe42467bd8843dab42a" gracePeriod=2 Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.860284 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jt656" event={"ID":"75eaa04a-0f88-4307-be6e-fe8d742145a9","Type":"ContainerStarted","Data":"29e418cbd77ae90e4c6acbd701a23466758c201a56e65ade17b69edbfa408cde"} Feb 18 19:53:58 crc kubenswrapper[5007]: I0218 19:53:58.861074 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jt656" Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 
19:53:59.010244 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz" podStartSLOduration=4.335314279 podStartE2EDuration="45.010223797s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:16.941255214 +0000 UTC m=+881.723374738" lastFinishedPulling="2026-02-18 19:53:57.616164732 +0000 UTC m=+922.398284256" observedRunningTime="2026-02-18 19:53:58.906826257 +0000 UTC m=+923.688945791" watchObservedRunningTime="2026-02-18 19:53:59.010223797 +0000 UTC m=+923.792343311" Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.062331 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj" podStartSLOduration=3.623701975 podStartE2EDuration="45.062314138s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:16.178546287 +0000 UTC m=+880.960665811" lastFinishedPulling="2026-02-18 19:53:57.61715845 +0000 UTC m=+922.399277974" observedRunningTime="2026-02-18 19:53:59.058961053 +0000 UTC m=+923.841080587" watchObservedRunningTime="2026-02-18 19:53:59.062314138 +0000 UTC m=+923.844433662" Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.063408 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlvn4" podStartSLOduration=3.458698763 podStartE2EDuration="44.063402678s" podCreationTimestamp="2026-02-18 19:53:15 +0000 UTC" firstStartedPulling="2026-02-18 19:53:17.003491272 +0000 UTC m=+881.785610796" lastFinishedPulling="2026-02-18 19:53:57.608195187 +0000 UTC m=+922.390314711" observedRunningTime="2026-02-18 19:53:59.010996619 +0000 UTC m=+923.793116153" watchObservedRunningTime="2026-02-18 19:53:59.063402678 +0000 UTC m=+923.845522202" Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.096031 5007 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq" podStartSLOduration=4.413090675 podStartE2EDuration="45.096013419s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:16.931824378 +0000 UTC m=+881.713943912" lastFinishedPulling="2026-02-18 19:53:57.614747132 +0000 UTC m=+922.396866656" observedRunningTime="2026-02-18 19:53:59.086959724 +0000 UTC m=+923.869079248" watchObservedRunningTime="2026-02-18 19:53:59.096013419 +0000 UTC m=+923.878132943" Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.122214 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" podStartSLOduration=34.625883607 podStartE2EDuration="45.122197819s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:47.068210792 +0000 UTC m=+911.850330316" lastFinishedPulling="2026-02-18 19:53:57.564525004 +0000 UTC m=+922.346644528" observedRunningTime="2026-02-18 19:53:59.116820457 +0000 UTC m=+923.898939981" watchObservedRunningTime="2026-02-18 19:53:59.122197819 +0000 UTC m=+923.904317343" Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.160223 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq" podStartSLOduration=4.505421343 podStartE2EDuration="45.160206082s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:16.941020978 +0000 UTC m=+881.723140502" lastFinishedPulling="2026-02-18 19:53:57.595805717 +0000 UTC m=+922.377925241" observedRunningTime="2026-02-18 19:53:59.157154126 +0000 UTC m=+923.939273650" watchObservedRunningTime="2026-02-18 19:53:59.160206082 +0000 UTC m=+923.942325606" Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.201341 5007 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz" podStartSLOduration=3.689500152 podStartE2EDuration="45.201323053s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:16.007537368 +0000 UTC m=+880.789656892" lastFinishedPulling="2026-02-18 19:53:57.519360269 +0000 UTC m=+922.301479793" observedRunningTime="2026-02-18 19:53:59.197392392 +0000 UTC m=+923.979511906" watchObservedRunningTime="2026-02-18 19:53:59.201323053 +0000 UTC m=+923.983442577" Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.220985 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jt656" podStartSLOduration=3.944519204 podStartE2EDuration="45.220969358s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:16.341300583 +0000 UTC m=+881.123420107" lastFinishedPulling="2026-02-18 19:53:57.617750737 +0000 UTC m=+922.399870261" observedRunningTime="2026-02-18 19:53:59.220545106 +0000 UTC m=+924.002664630" watchObservedRunningTime="2026-02-18 19:53:59.220969358 +0000 UTC m=+924.003088872" Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.261140 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4" podStartSLOduration=4.467315636 podStartE2EDuration="45.261121962s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:16.822198322 +0000 UTC m=+881.604317846" lastFinishedPulling="2026-02-18 19:53:57.616004648 +0000 UTC m=+922.398124172" observedRunningTime="2026-02-18 19:53:59.257117539 +0000 UTC m=+924.039237063" watchObservedRunningTime="2026-02-18 19:53:59.261121962 +0000 UTC m=+924.043241486" Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.298754 5007 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb" podStartSLOduration=4.012585556 podStartE2EDuration="45.298738664s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:16.320635969 +0000 UTC m=+881.102755493" lastFinishedPulling="2026-02-18 19:53:57.606789077 +0000 UTC m=+922.388908601" observedRunningTime="2026-02-18 19:53:59.295930965 +0000 UTC m=+924.078050489" watchObservedRunningTime="2026-02-18 19:53:59.298738664 +0000 UTC m=+924.080858188" Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.880135 5007 generic.go:334] "Generic (PLEG): container finished" podID="cfaf3750-e081-4424-aec5-4fee304984bb" containerID="9d85b727f1830984ea1d99a2c7ab61f44e7816b60b93bbe42467bd8843dab42a" exitCode=0 Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.880584 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq8mk" event={"ID":"cfaf3750-e081-4424-aec5-4fee304984bb","Type":"ContainerDied","Data":"9d85b727f1830984ea1d99a2c7ab61f44e7816b60b93bbe42467bd8843dab42a"} Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.882202 5007 generic.go:334] "Generic (PLEG): container finished" podID="c5e40f93-0889-414e-aac3-710e9fbd2ade" containerID="efc9bb13e149ddc68c9145a704449e0d108c3357a06e0d519fdd511e9553286e" exitCode=0 Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.882989 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjl6z" event={"ID":"c5e40f93-0889-414e-aac3-710e9fbd2ade","Type":"ContainerDied","Data":"efc9bb13e149ddc68c9145a704449e0d108c3357a06e0d519fdd511e9553286e"} Feb 18 19:53:59 crc kubenswrapper[5007]: I0218 19:53:59.914024 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8rddb" podStartSLOduration=11.986074597 podStartE2EDuration="26.914010068s" 
podCreationTimestamp="2026-02-18 19:53:33 +0000 UTC" firstStartedPulling="2026-02-18 19:53:42.414864492 +0000 UTC m=+907.196984006" lastFinishedPulling="2026-02-18 19:53:57.342799953 +0000 UTC m=+922.124919477" observedRunningTime="2026-02-18 19:53:59.329344948 +0000 UTC m=+924.111464472" watchObservedRunningTime="2026-02-18 19:53:59.914010068 +0000 UTC m=+924.696129592" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.166276 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.235631 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb2vp\" (UniqueName: \"kubernetes.io/projected/cfaf3750-e081-4424-aec5-4fee304984bb-kube-api-access-kb2vp\") pod \"cfaf3750-e081-4424-aec5-4fee304984bb\" (UID: \"cfaf3750-e081-4424-aec5-4fee304984bb\") " Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.235684 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfaf3750-e081-4424-aec5-4fee304984bb-utilities\") pod \"cfaf3750-e081-4424-aec5-4fee304984bb\" (UID: \"cfaf3750-e081-4424-aec5-4fee304984bb\") " Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.235817 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfaf3750-e081-4424-aec5-4fee304984bb-catalog-content\") pod \"cfaf3750-e081-4424-aec5-4fee304984bb\" (UID: \"cfaf3750-e081-4424-aec5-4fee304984bb\") " Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.237141 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfaf3750-e081-4424-aec5-4fee304984bb-utilities" (OuterVolumeSpecName: "utilities") pod "cfaf3750-e081-4424-aec5-4fee304984bb" (UID: "cfaf3750-e081-4424-aec5-4fee304984bb"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.241791 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfaf3750-e081-4424-aec5-4fee304984bb-kube-api-access-kb2vp" (OuterVolumeSpecName: "kube-api-access-kb2vp") pod "cfaf3750-e081-4424-aec5-4fee304984bb" (UID: "cfaf3750-e081-4424-aec5-4fee304984bb"). InnerVolumeSpecName "kube-api-access-kb2vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.291764 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfaf3750-e081-4424-aec5-4fee304984bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfaf3750-e081-4424-aec5-4fee304984bb" (UID: "cfaf3750-e081-4424-aec5-4fee304984bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.337902 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfaf3750-e081-4424-aec5-4fee304984bb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.337950 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb2vp\" (UniqueName: \"kubernetes.io/projected/cfaf3750-e081-4424-aec5-4fee304984bb-kube-api-access-kb2vp\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.337965 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfaf3750-e081-4424-aec5-4fee304984bb-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.899243 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjl6z" 
event={"ID":"c5e40f93-0889-414e-aac3-710e9fbd2ade","Type":"ContainerStarted","Data":"27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f"} Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.901190 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" event={"ID":"1d0172b2-fe1c-4265-a3cb-1c2f267223a5","Type":"ContainerStarted","Data":"060650e8a9b3f6f32683e2597f0dce10411c9507ec25e6ca1a753d29226f9c2f"} Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.901337 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.903634 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq8mk" event={"ID":"cfaf3750-e081-4424-aec5-4fee304984bb","Type":"ContainerDied","Data":"b95b1d3f895df892c759f11889d7f6ae79e116edb2ca6b0e262429f7362c53e3"} Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.903659 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq8mk" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.903693 5007 scope.go:117] "RemoveContainer" containerID="9d85b727f1830984ea1d99a2c7ab61f44e7816b60b93bbe42467bd8843dab42a" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.921763 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sjl6z" podStartSLOduration=9.104629435 podStartE2EDuration="12.921741752s" podCreationTimestamp="2026-02-18 19:53:49 +0000 UTC" firstStartedPulling="2026-02-18 19:53:57.675827687 +0000 UTC m=+922.457947201" lastFinishedPulling="2026-02-18 19:54:01.492940004 +0000 UTC m=+926.275059518" observedRunningTime="2026-02-18 19:54:01.918077869 +0000 UTC m=+926.700197403" watchObservedRunningTime="2026-02-18 19:54:01.921741752 +0000 UTC m=+926.703861276" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.936888 5007 scope.go:117] "RemoveContainer" containerID="29fe5dff7b45af4c92aae760315d3b56a3062f82973314792b7e305478c24fee" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.957169 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" podStartSLOduration=35.498039615 podStartE2EDuration="47.957152222s" podCreationTimestamp="2026-02-18 19:53:14 +0000 UTC" firstStartedPulling="2026-02-18 19:53:48.626232837 +0000 UTC m=+913.408352351" lastFinishedPulling="2026-02-18 19:54:01.085345434 +0000 UTC m=+925.867464958" observedRunningTime="2026-02-18 19:54:01.952901562 +0000 UTC m=+926.735021096" watchObservedRunningTime="2026-02-18 19:54:01.957152222 +0000 UTC m=+926.739271746" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.958546 5007 scope.go:117] "RemoveContainer" containerID="0f0c62eafebe9fb261c0244f0df006bb15fedac7a37a11c1a9f1136c62ce08ff" Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.974109 5007 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lq8mk"] Feb 18 19:54:01 crc kubenswrapper[5007]: I0218 19:54:01.980397 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lq8mk"] Feb 18 19:54:03 crc kubenswrapper[5007]: I0218 19:54:03.918614 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfaf3750-e081-4424-aec5-4fee304984bb" path="/var/lib/kubelet/pods/cfaf3750-e081-4424-aec5-4fee304984bb/volumes" Feb 18 19:54:04 crc kubenswrapper[5007]: I0218 19:54:04.200977 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:54:04 crc kubenswrapper[5007]: I0218 19:54:04.201218 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:54:04 crc kubenswrapper[5007]: I0218 19:54:04.602534 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-mlkjz" Feb 18 19:54:04 crc kubenswrapper[5007]: I0218 19:54:04.643546 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-lmg2v" Feb 18 19:54:05 crc kubenswrapper[5007]: I0218 19:54:05.046270 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-lhcvb" Feb 18 19:54:05 crc kubenswrapper[5007]: I0218 19:54:05.132780 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ckbhj" Feb 18 19:54:05 crc kubenswrapper[5007]: I0218 19:54:05.207390 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jt656" Feb 18 19:54:05 crc 
kubenswrapper[5007]: I0218 19:54:05.255151 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8rddb" podUID="c44790e5-a554-4ae5-80a7-c3e785dd5918" containerName="registry-server" probeResult="failure" output=< Feb 18 19:54:05 crc kubenswrapper[5007]: timeout: failed to connect service ":50051" within 1s Feb 18 19:54:05 crc kubenswrapper[5007]: > Feb 18 19:54:05 crc kubenswrapper[5007]: I0218 19:54:05.275311 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-49vtz" Feb 18 19:54:05 crc kubenswrapper[5007]: I0218 19:54:05.325874 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-8clwq" Feb 18 19:54:05 crc kubenswrapper[5007]: I0218 19:54:05.399181 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-p28vq" Feb 18 19:54:05 crc kubenswrapper[5007]: I0218 19:54:05.702992 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-sfsc4" Feb 18 19:54:05 crc kubenswrapper[5007]: I0218 19:54:05.779766 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7f6db7f7d8-d772r" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.585909 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s55sv"] Feb 18 19:54:06 crc kubenswrapper[5007]: E0218 19:54:06.586845 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfaf3750-e081-4424-aec5-4fee304984bb" containerName="extract-content" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.586865 5007 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cfaf3750-e081-4424-aec5-4fee304984bb" containerName="extract-content" Feb 18 19:54:06 crc kubenswrapper[5007]: E0218 19:54:06.586899 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfaf3750-e081-4424-aec5-4fee304984bb" containerName="extract-utilities" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.586907 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfaf3750-e081-4424-aec5-4fee304984bb" containerName="extract-utilities" Feb 18 19:54:06 crc kubenswrapper[5007]: E0218 19:54:06.586923 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfaf3750-e081-4424-aec5-4fee304984bb" containerName="registry-server" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.586932 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfaf3750-e081-4424-aec5-4fee304984bb" containerName="registry-server" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.587102 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfaf3750-e081-4424-aec5-4fee304984bb" containerName="registry-server" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.588411 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.618660 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s55sv"] Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.713751 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feff417b-0619-44f9-8239-609d1989c29c-utilities\") pod \"community-operators-s55sv\" (UID: \"feff417b-0619-44f9-8239-609d1989c29c\") " pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.713838 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feff417b-0619-44f9-8239-609d1989c29c-catalog-content\") pod \"community-operators-s55sv\" (UID: \"feff417b-0619-44f9-8239-609d1989c29c\") " pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.713912 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztsmk\" (UniqueName: \"kubernetes.io/projected/feff417b-0619-44f9-8239-609d1989c29c-kube-api-access-ztsmk\") pod \"community-operators-s55sv\" (UID: \"feff417b-0619-44f9-8239-609d1989c29c\") " pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.813404 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l4tvp" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.815058 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feff417b-0619-44f9-8239-609d1989c29c-catalog-content\") pod 
\"community-operators-s55sv\" (UID: \"feff417b-0619-44f9-8239-609d1989c29c\") " pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.815096 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztsmk\" (UniqueName: \"kubernetes.io/projected/feff417b-0619-44f9-8239-609d1989c29c-kube-api-access-ztsmk\") pod \"community-operators-s55sv\" (UID: \"feff417b-0619-44f9-8239-609d1989c29c\") " pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.815195 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feff417b-0619-44f9-8239-609d1989c29c-utilities\") pod \"community-operators-s55sv\" (UID: \"feff417b-0619-44f9-8239-609d1989c29c\") " pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.815560 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feff417b-0619-44f9-8239-609d1989c29c-catalog-content\") pod \"community-operators-s55sv\" (UID: \"feff417b-0619-44f9-8239-609d1989c29c\") " pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.815608 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feff417b-0619-44f9-8239-609d1989c29c-utilities\") pod \"community-operators-s55sv\" (UID: \"feff417b-0619-44f9-8239-609d1989c29c\") " pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.834950 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztsmk\" (UniqueName: \"kubernetes.io/projected/feff417b-0619-44f9-8239-609d1989c29c-kube-api-access-ztsmk\") pod \"community-operators-s55sv\" (UID: 
\"feff417b-0619-44f9-8239-609d1989c29c\") " pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:06 crc kubenswrapper[5007]: I0218 19:54:06.916281 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:07 crc kubenswrapper[5007]: I0218 19:54:07.350910 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ccsdcr" Feb 18 19:54:07 crc kubenswrapper[5007]: I0218 19:54:07.403482 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s55sv"] Feb 18 19:54:07 crc kubenswrapper[5007]: I0218 19:54:07.621572 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-dd6d5466c-r8s8q" Feb 18 19:54:07 crc kubenswrapper[5007]: I0218 19:54:07.946419 5007 generic.go:334] "Generic (PLEG): container finished" podID="feff417b-0619-44f9-8239-609d1989c29c" containerID="8af2b23af0d33b21740046370cc0c1ea9836d8a12aff985af4f13679329a0c10" exitCode=0 Feb 18 19:54:07 crc kubenswrapper[5007]: I0218 19:54:07.946520 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s55sv" event={"ID":"feff417b-0619-44f9-8239-609d1989c29c","Type":"ContainerDied","Data":"8af2b23af0d33b21740046370cc0c1ea9836d8a12aff985af4f13679329a0c10"} Feb 18 19:54:07 crc kubenswrapper[5007]: I0218 19:54:07.946593 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s55sv" event={"ID":"feff417b-0619-44f9-8239-609d1989c29c","Type":"ContainerStarted","Data":"9091d018d7aa15cc059dfec275073cc0e7ca918dedb1fa01446cf509f0955f70"} Feb 18 19:54:08 crc kubenswrapper[5007]: I0218 19:54:08.957503 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s55sv" 
event={"ID":"feff417b-0619-44f9-8239-609d1989c29c","Type":"ContainerStarted","Data":"5a7e2b821f467b1442709d0d666e3ba8c77dd8e3dfbd317efa8a984d162c4280"} Feb 18 19:54:09 crc kubenswrapper[5007]: I0218 19:54:09.853965 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:54:09 crc kubenswrapper[5007]: I0218 19:54:09.854502 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:54:09 crc kubenswrapper[5007]: I0218 19:54:09.949900 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:54:09 crc kubenswrapper[5007]: I0218 19:54:09.980335 5007 generic.go:334] "Generic (PLEG): container finished" podID="feff417b-0619-44f9-8239-609d1989c29c" containerID="5a7e2b821f467b1442709d0d666e3ba8c77dd8e3dfbd317efa8a984d162c4280" exitCode=0 Feb 18 19:54:09 crc kubenswrapper[5007]: I0218 19:54:09.980499 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s55sv" event={"ID":"feff417b-0619-44f9-8239-609d1989c29c","Type":"ContainerDied","Data":"5a7e2b821f467b1442709d0d666e3ba8c77dd8e3dfbd317efa8a984d162c4280"} Feb 18 19:54:10 crc kubenswrapper[5007]: I0218 19:54:10.053789 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:54:10 crc kubenswrapper[5007]: I0218 19:54:10.999551 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s55sv" event={"ID":"feff417b-0619-44f9-8239-609d1989c29c","Type":"ContainerStarted","Data":"9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307"} Feb 18 19:54:11 crc kubenswrapper[5007]: I0218 19:54:11.024926 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s55sv" 
podStartSLOduration=2.572579737 podStartE2EDuration="5.024911395s" podCreationTimestamp="2026-02-18 19:54:06 +0000 UTC" firstStartedPulling="2026-02-18 19:54:07.94800032 +0000 UTC m=+932.730119854" lastFinishedPulling="2026-02-18 19:54:10.400331988 +0000 UTC m=+935.182451512" observedRunningTime="2026-02-18 19:54:11.020994895 +0000 UTC m=+935.803114499" watchObservedRunningTime="2026-02-18 19:54:11.024911395 +0000 UTC m=+935.807030909" Feb 18 19:54:12 crc kubenswrapper[5007]: I0218 19:54:12.576994 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjl6z"] Feb 18 19:54:13 crc kubenswrapper[5007]: I0218 19:54:13.014240 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sjl6z" podUID="c5e40f93-0889-414e-aac3-710e9fbd2ade" containerName="registry-server" containerID="cri-o://27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f" gracePeriod=2 Feb 18 19:54:13 crc kubenswrapper[5007]: I0218 19:54:13.391825 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:54:13 crc kubenswrapper[5007]: I0218 19:54:13.520647 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e40f93-0889-414e-aac3-710e9fbd2ade-utilities\") pod \"c5e40f93-0889-414e-aac3-710e9fbd2ade\" (UID: \"c5e40f93-0889-414e-aac3-710e9fbd2ade\") " Feb 18 19:54:13 crc kubenswrapper[5007]: I0218 19:54:13.520724 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8jr4\" (UniqueName: \"kubernetes.io/projected/c5e40f93-0889-414e-aac3-710e9fbd2ade-kube-api-access-w8jr4\") pod \"c5e40f93-0889-414e-aac3-710e9fbd2ade\" (UID: \"c5e40f93-0889-414e-aac3-710e9fbd2ade\") " Feb 18 19:54:13 crc kubenswrapper[5007]: I0218 19:54:13.520761 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e40f93-0889-414e-aac3-710e9fbd2ade-catalog-content\") pod \"c5e40f93-0889-414e-aac3-710e9fbd2ade\" (UID: \"c5e40f93-0889-414e-aac3-710e9fbd2ade\") " Feb 18 19:54:13 crc kubenswrapper[5007]: I0218 19:54:13.522744 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5e40f93-0889-414e-aac3-710e9fbd2ade-utilities" (OuterVolumeSpecName: "utilities") pod "c5e40f93-0889-414e-aac3-710e9fbd2ade" (UID: "c5e40f93-0889-414e-aac3-710e9fbd2ade"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:13 crc kubenswrapper[5007]: I0218 19:54:13.526591 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e40f93-0889-414e-aac3-710e9fbd2ade-kube-api-access-w8jr4" (OuterVolumeSpecName: "kube-api-access-w8jr4") pod "c5e40f93-0889-414e-aac3-710e9fbd2ade" (UID: "c5e40f93-0889-414e-aac3-710e9fbd2ade"). InnerVolumeSpecName "kube-api-access-w8jr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:13 crc kubenswrapper[5007]: I0218 19:54:13.559314 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5e40f93-0889-414e-aac3-710e9fbd2ade-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5e40f93-0889-414e-aac3-710e9fbd2ade" (UID: "c5e40f93-0889-414e-aac3-710e9fbd2ade"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:13 crc kubenswrapper[5007]: I0218 19:54:13.622956 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e40f93-0889-414e-aac3-710e9fbd2ade-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:13 crc kubenswrapper[5007]: I0218 19:54:13.623030 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8jr4\" (UniqueName: \"kubernetes.io/projected/c5e40f93-0889-414e-aac3-710e9fbd2ade-kube-api-access-w8jr4\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:13 crc kubenswrapper[5007]: I0218 19:54:13.623065 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e40f93-0889-414e-aac3-710e9fbd2ade-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.024338 5007 generic.go:334] "Generic (PLEG): container finished" podID="c5e40f93-0889-414e-aac3-710e9fbd2ade" containerID="27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f" exitCode=0 Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.024385 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjl6z" event={"ID":"c5e40f93-0889-414e-aac3-710e9fbd2ade","Type":"ContainerDied","Data":"27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f"} Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.024471 5007 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjl6z" Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.024717 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjl6z" event={"ID":"c5e40f93-0889-414e-aac3-710e9fbd2ade","Type":"ContainerDied","Data":"f810659b1a1d9d6b6380b6342b2185f02a2d89b1b0032da5c554040afd5e6d4d"} Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.024734 5007 scope.go:117] "RemoveContainer" containerID="27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f" Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.052734 5007 scope.go:117] "RemoveContainer" containerID="efc9bb13e149ddc68c9145a704449e0d108c3357a06e0d519fdd511e9553286e" Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.054695 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjl6z"] Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.068627 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjl6z"] Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.076007 5007 scope.go:117] "RemoveContainer" containerID="520ae82a149926053ea16819d351ce62beb123bec6ff59c92f2d272dfc5b355c" Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.099959 5007 scope.go:117] "RemoveContainer" containerID="27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f" Feb 18 19:54:14 crc kubenswrapper[5007]: E0218 19:54:14.100518 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f\": container with ID starting with 27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f not found: ID does not exist" containerID="27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f" Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.100588 5007 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f"} err="failed to get container status \"27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f\": rpc error: code = NotFound desc = could not find container \"27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f\": container with ID starting with 27ae6f00935492b844a8e312106d4e0aa71afbc6c84c300b44880e349585559f not found: ID does not exist" Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.100631 5007 scope.go:117] "RemoveContainer" containerID="efc9bb13e149ddc68c9145a704449e0d108c3357a06e0d519fdd511e9553286e" Feb 18 19:54:14 crc kubenswrapper[5007]: E0218 19:54:14.101236 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efc9bb13e149ddc68c9145a704449e0d108c3357a06e0d519fdd511e9553286e\": container with ID starting with efc9bb13e149ddc68c9145a704449e0d108c3357a06e0d519fdd511e9553286e not found: ID does not exist" containerID="efc9bb13e149ddc68c9145a704449e0d108c3357a06e0d519fdd511e9553286e" Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.101273 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc9bb13e149ddc68c9145a704449e0d108c3357a06e0d519fdd511e9553286e"} err="failed to get container status \"efc9bb13e149ddc68c9145a704449e0d108c3357a06e0d519fdd511e9553286e\": rpc error: code = NotFound desc = could not find container \"efc9bb13e149ddc68c9145a704449e0d108c3357a06e0d519fdd511e9553286e\": container with ID starting with efc9bb13e149ddc68c9145a704449e0d108c3357a06e0d519fdd511e9553286e not found: ID does not exist" Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.101294 5007 scope.go:117] "RemoveContainer" containerID="520ae82a149926053ea16819d351ce62beb123bec6ff59c92f2d272dfc5b355c" Feb 18 19:54:14 crc kubenswrapper[5007]: E0218 
19:54:14.101591 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"520ae82a149926053ea16819d351ce62beb123bec6ff59c92f2d272dfc5b355c\": container with ID starting with 520ae82a149926053ea16819d351ce62beb123bec6ff59c92f2d272dfc5b355c not found: ID does not exist" containerID="520ae82a149926053ea16819d351ce62beb123bec6ff59c92f2d272dfc5b355c" Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.101624 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"520ae82a149926053ea16819d351ce62beb123bec6ff59c92f2d272dfc5b355c"} err="failed to get container status \"520ae82a149926053ea16819d351ce62beb123bec6ff59c92f2d272dfc5b355c\": rpc error: code = NotFound desc = could not find container \"520ae82a149926053ea16819d351ce62beb123bec6ff59c92f2d272dfc5b355c\": container with ID starting with 520ae82a149926053ea16819d351ce62beb123bec6ff59c92f2d272dfc5b355c not found: ID does not exist" Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.261404 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:54:14 crc kubenswrapper[5007]: I0218 19:54:14.305907 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:54:15 crc kubenswrapper[5007]: I0218 19:54:15.664059 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:54:15 crc kubenswrapper[5007]: I0218 19:54:15.664541 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:54:15 crc kubenswrapper[5007]: I0218 19:54:15.923132 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e40f93-0889-414e-aac3-710e9fbd2ade" path="/var/lib/kubelet/pods/c5e40f93-0889-414e-aac3-710e9fbd2ade/volumes" Feb 18 19:54:16 crc kubenswrapper[5007]: I0218 19:54:16.916509 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:16 crc kubenswrapper[5007]: I0218 19:54:16.917697 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:16 crc kubenswrapper[5007]: I0218 19:54:16.981328 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rddb"] Feb 18 19:54:16 crc kubenswrapper[5007]: I0218 19:54:16.981725 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8rddb" podUID="c44790e5-a554-4ae5-80a7-c3e785dd5918" containerName="registry-server" containerID="cri-o://b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4" gracePeriod=2 Feb 18 19:54:16 crc kubenswrapper[5007]: I0218 19:54:16.996103 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:17 crc kubenswrapper[5007]: I0218 19:54:17.120213 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:17 crc kubenswrapper[5007]: I0218 19:54:17.404838 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:54:17 crc kubenswrapper[5007]: I0218 19:54:17.482367 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44790e5-a554-4ae5-80a7-c3e785dd5918-catalog-content\") pod \"c44790e5-a554-4ae5-80a7-c3e785dd5918\" (UID: \"c44790e5-a554-4ae5-80a7-c3e785dd5918\") " Feb 18 19:54:17 crc kubenswrapper[5007]: I0218 19:54:17.482629 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44790e5-a554-4ae5-80a7-c3e785dd5918-utilities\") pod \"c44790e5-a554-4ae5-80a7-c3e785dd5918\" (UID: \"c44790e5-a554-4ae5-80a7-c3e785dd5918\") " Feb 18 19:54:17 crc kubenswrapper[5007]: I0218 19:54:17.482654 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66p2m\" (UniqueName: \"kubernetes.io/projected/c44790e5-a554-4ae5-80a7-c3e785dd5918-kube-api-access-66p2m\") pod \"c44790e5-a554-4ae5-80a7-c3e785dd5918\" (UID: \"c44790e5-a554-4ae5-80a7-c3e785dd5918\") " Feb 18 19:54:17 crc kubenswrapper[5007]: I0218 19:54:17.483591 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c44790e5-a554-4ae5-80a7-c3e785dd5918-utilities" (OuterVolumeSpecName: "utilities") pod "c44790e5-a554-4ae5-80a7-c3e785dd5918" (UID: "c44790e5-a554-4ae5-80a7-c3e785dd5918"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:17 crc kubenswrapper[5007]: I0218 19:54:17.491880 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44790e5-a554-4ae5-80a7-c3e785dd5918-kube-api-access-66p2m" (OuterVolumeSpecName: "kube-api-access-66p2m") pod "c44790e5-a554-4ae5-80a7-c3e785dd5918" (UID: "c44790e5-a554-4ae5-80a7-c3e785dd5918"). InnerVolumeSpecName "kube-api-access-66p2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:17 crc kubenswrapper[5007]: I0218 19:54:17.585932 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44790e5-a554-4ae5-80a7-c3e785dd5918-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:17 crc kubenswrapper[5007]: I0218 19:54:17.585967 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66p2m\" (UniqueName: \"kubernetes.io/projected/c44790e5-a554-4ae5-80a7-c3e785dd5918-kube-api-access-66p2m\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:17 crc kubenswrapper[5007]: I0218 19:54:17.607631 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c44790e5-a554-4ae5-80a7-c3e785dd5918-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c44790e5-a554-4ae5-80a7-c3e785dd5918" (UID: "c44790e5-a554-4ae5-80a7-c3e785dd5918"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:17 crc kubenswrapper[5007]: I0218 19:54:17.687319 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44790e5-a554-4ae5-80a7-c3e785dd5918-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.081540 5007 generic.go:334] "Generic (PLEG): container finished" podID="c44790e5-a554-4ae5-80a7-c3e785dd5918" containerID="b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4" exitCode=0 Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.085698 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rddb" Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.086760 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rddb" event={"ID":"c44790e5-a554-4ae5-80a7-c3e785dd5918","Type":"ContainerDied","Data":"b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4"} Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.086866 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rddb" event={"ID":"c44790e5-a554-4ae5-80a7-c3e785dd5918","Type":"ContainerDied","Data":"11f0f56e174ac3a611ffe1e00b7c044ed9aa48d1800a78fde978a840c39f4fe0"} Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.086912 5007 scope.go:117] "RemoveContainer" containerID="b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4" Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.116910 5007 scope.go:117] "RemoveContainer" containerID="f7c8ee18e32ff65a4332ea8c2568087c08a52d39584b922071798a1297579424" Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.126473 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rddb"] Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.132617 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8rddb"] Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.156499 5007 scope.go:117] "RemoveContainer" containerID="bee816a2fb34cc6a80f806e876a1c962f2af64d76af289ade45e770f8dcdc69d" Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.185985 5007 scope.go:117] "RemoveContainer" containerID="b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4" Feb 18 19:54:18 crc kubenswrapper[5007]: E0218 19:54:18.186608 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4\": container with ID starting with b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4 not found: ID does not exist" containerID="b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4" Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.186671 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4"} err="failed to get container status \"b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4\": rpc error: code = NotFound desc = could not find container \"b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4\": container with ID starting with b3bd24547b663c766b741c6d1d4a8e4e41216672710e5b357baa8eed7a02b5a4 not found: ID does not exist" Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.186706 5007 scope.go:117] "RemoveContainer" containerID="f7c8ee18e32ff65a4332ea8c2568087c08a52d39584b922071798a1297579424" Feb 18 19:54:18 crc kubenswrapper[5007]: E0218 19:54:18.187079 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7c8ee18e32ff65a4332ea8c2568087c08a52d39584b922071798a1297579424\": container with ID starting with f7c8ee18e32ff65a4332ea8c2568087c08a52d39584b922071798a1297579424 not found: ID does not exist" containerID="f7c8ee18e32ff65a4332ea8c2568087c08a52d39584b922071798a1297579424" Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.187107 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7c8ee18e32ff65a4332ea8c2568087c08a52d39584b922071798a1297579424"} err="failed to get container status \"f7c8ee18e32ff65a4332ea8c2568087c08a52d39584b922071798a1297579424\": rpc error: code = NotFound desc = could not find container \"f7c8ee18e32ff65a4332ea8c2568087c08a52d39584b922071798a1297579424\": container with ID 
starting with f7c8ee18e32ff65a4332ea8c2568087c08a52d39584b922071798a1297579424 not found: ID does not exist" Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.187119 5007 scope.go:117] "RemoveContainer" containerID="bee816a2fb34cc6a80f806e876a1c962f2af64d76af289ade45e770f8dcdc69d" Feb 18 19:54:18 crc kubenswrapper[5007]: E0218 19:54:18.187573 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bee816a2fb34cc6a80f806e876a1c962f2af64d76af289ade45e770f8dcdc69d\": container with ID starting with bee816a2fb34cc6a80f806e876a1c962f2af64d76af289ade45e770f8dcdc69d not found: ID does not exist" containerID="bee816a2fb34cc6a80f806e876a1c962f2af64d76af289ade45e770f8dcdc69d" Feb 18 19:54:18 crc kubenswrapper[5007]: I0218 19:54:18.187611 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bee816a2fb34cc6a80f806e876a1c962f2af64d76af289ade45e770f8dcdc69d"} err="failed to get container status \"bee816a2fb34cc6a80f806e876a1c962f2af64d76af289ade45e770f8dcdc69d\": rpc error: code = NotFound desc = could not find container \"bee816a2fb34cc6a80f806e876a1c962f2af64d76af289ade45e770f8dcdc69d\": container with ID starting with bee816a2fb34cc6a80f806e876a1c962f2af64d76af289ade45e770f8dcdc69d not found: ID does not exist" Feb 18 19:54:19 crc kubenswrapper[5007]: I0218 19:54:19.378116 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s55sv"] Feb 18 19:54:19 crc kubenswrapper[5007]: I0218 19:54:19.378606 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s55sv" podUID="feff417b-0619-44f9-8239-609d1989c29c" containerName="registry-server" containerID="cri-o://9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307" gracePeriod=2 Feb 18 19:54:19 crc kubenswrapper[5007]: I0218 19:54:19.811754 5007 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:19 crc kubenswrapper[5007]: I0218 19:54:19.919589 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44790e5-a554-4ae5-80a7-c3e785dd5918" path="/var/lib/kubelet/pods/c44790e5-a554-4ae5-80a7-c3e785dd5918/volumes" Feb 18 19:54:19 crc kubenswrapper[5007]: I0218 19:54:19.927356 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feff417b-0619-44f9-8239-609d1989c29c-utilities\") pod \"feff417b-0619-44f9-8239-609d1989c29c\" (UID: \"feff417b-0619-44f9-8239-609d1989c29c\") " Feb 18 19:54:19 crc kubenswrapper[5007]: I0218 19:54:19.927471 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztsmk\" (UniqueName: \"kubernetes.io/projected/feff417b-0619-44f9-8239-609d1989c29c-kube-api-access-ztsmk\") pod \"feff417b-0619-44f9-8239-609d1989c29c\" (UID: \"feff417b-0619-44f9-8239-609d1989c29c\") " Feb 18 19:54:19 crc kubenswrapper[5007]: I0218 19:54:19.927509 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feff417b-0619-44f9-8239-609d1989c29c-catalog-content\") pod \"feff417b-0619-44f9-8239-609d1989c29c\" (UID: \"feff417b-0619-44f9-8239-609d1989c29c\") " Feb 18 19:54:19 crc kubenswrapper[5007]: I0218 19:54:19.928787 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feff417b-0619-44f9-8239-609d1989c29c-utilities" (OuterVolumeSpecName: "utilities") pod "feff417b-0619-44f9-8239-609d1989c29c" (UID: "feff417b-0619-44f9-8239-609d1989c29c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[5007]: I0218 19:54:19.944549 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feff417b-0619-44f9-8239-609d1989c29c-kube-api-access-ztsmk" (OuterVolumeSpecName: "kube-api-access-ztsmk") pod "feff417b-0619-44f9-8239-609d1989c29c" (UID: "feff417b-0619-44f9-8239-609d1989c29c"). InnerVolumeSpecName "kube-api-access-ztsmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.014761 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feff417b-0619-44f9-8239-609d1989c29c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "feff417b-0619-44f9-8239-609d1989c29c" (UID: "feff417b-0619-44f9-8239-609d1989c29c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.029569 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feff417b-0619-44f9-8239-609d1989c29c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.029622 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztsmk\" (UniqueName: \"kubernetes.io/projected/feff417b-0619-44f9-8239-609d1989c29c-kube-api-access-ztsmk\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.029647 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feff417b-0619-44f9-8239-609d1989c29c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.103794 5007 generic.go:334] "Generic (PLEG): container finished" podID="feff417b-0619-44f9-8239-609d1989c29c" 
containerID="9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307" exitCode=0 Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.103848 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s55sv" event={"ID":"feff417b-0619-44f9-8239-609d1989c29c","Type":"ContainerDied","Data":"9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307"} Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.103894 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s55sv" event={"ID":"feff417b-0619-44f9-8239-609d1989c29c","Type":"ContainerDied","Data":"9091d018d7aa15cc059dfec275073cc0e7ca918dedb1fa01446cf509f0955f70"} Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.103949 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s55sv" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.103945 5007 scope.go:117] "RemoveContainer" containerID="9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.131405 5007 scope.go:117] "RemoveContainer" containerID="5a7e2b821f467b1442709d0d666e3ba8c77dd8e3dfbd317efa8a984d162c4280" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.155941 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s55sv"] Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.166104 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s55sv"] Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.187657 5007 scope.go:117] "RemoveContainer" containerID="8af2b23af0d33b21740046370cc0c1ea9836d8a12aff985af4f13679329a0c10" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.209284 5007 scope.go:117] "RemoveContainer" containerID="9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307" Feb 18 
19:54:20 crc kubenswrapper[5007]: E0218 19:54:20.209973 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307\": container with ID starting with 9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307 not found: ID does not exist" containerID="9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.210037 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307"} err="failed to get container status \"9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307\": rpc error: code = NotFound desc = could not find container \"9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307\": container with ID starting with 9f36ac40ad4781df3c21b814fc12a3367d48c68e9a1aa27bf2b42645bba91307 not found: ID does not exist" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.210081 5007 scope.go:117] "RemoveContainer" containerID="5a7e2b821f467b1442709d0d666e3ba8c77dd8e3dfbd317efa8a984d162c4280" Feb 18 19:54:20 crc kubenswrapper[5007]: E0218 19:54:20.210523 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a7e2b821f467b1442709d0d666e3ba8c77dd8e3dfbd317efa8a984d162c4280\": container with ID starting with 5a7e2b821f467b1442709d0d666e3ba8c77dd8e3dfbd317efa8a984d162c4280 not found: ID does not exist" containerID="5a7e2b821f467b1442709d0d666e3ba8c77dd8e3dfbd317efa8a984d162c4280" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.210559 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7e2b821f467b1442709d0d666e3ba8c77dd8e3dfbd317efa8a984d162c4280"} err="failed to get container status 
\"5a7e2b821f467b1442709d0d666e3ba8c77dd8e3dfbd317efa8a984d162c4280\": rpc error: code = NotFound desc = could not find container \"5a7e2b821f467b1442709d0d666e3ba8c77dd8e3dfbd317efa8a984d162c4280\": container with ID starting with 5a7e2b821f467b1442709d0d666e3ba8c77dd8e3dfbd317efa8a984d162c4280 not found: ID does not exist" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.210578 5007 scope.go:117] "RemoveContainer" containerID="8af2b23af0d33b21740046370cc0c1ea9836d8a12aff985af4f13679329a0c10" Feb 18 19:54:20 crc kubenswrapper[5007]: E0218 19:54:20.210956 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af2b23af0d33b21740046370cc0c1ea9836d8a12aff985af4f13679329a0c10\": container with ID starting with 8af2b23af0d33b21740046370cc0c1ea9836d8a12aff985af4f13679329a0c10 not found: ID does not exist" containerID="8af2b23af0d33b21740046370cc0c1ea9836d8a12aff985af4f13679329a0c10" Feb 18 19:54:20 crc kubenswrapper[5007]: I0218 19:54:20.210980 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af2b23af0d33b21740046370cc0c1ea9836d8a12aff985af4f13679329a0c10"} err="failed to get container status \"8af2b23af0d33b21740046370cc0c1ea9836d8a12aff985af4f13679329a0c10\": rpc error: code = NotFound desc = could not find container \"8af2b23af0d33b21740046370cc0c1ea9836d8a12aff985af4f13679329a0c10\": container with ID starting with 8af2b23af0d33b21740046370cc0c1ea9836d8a12aff985af4f13679329a0c10 not found: ID does not exist" Feb 18 19:54:21 crc kubenswrapper[5007]: I0218 19:54:21.927101 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feff417b-0619-44f9-8239-609d1989c29c" path="/var/lib/kubelet/pods/feff417b-0619-44f9-8239-609d1989c29c/volumes" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.157386 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-dwh9w"] Feb 18 19:54:29 crc 
kubenswrapper[5007]: E0218 19:54:29.159797 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feff417b-0619-44f9-8239-609d1989c29c" containerName="registry-server" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.159816 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="feff417b-0619-44f9-8239-609d1989c29c" containerName="registry-server" Feb 18 19:54:29 crc kubenswrapper[5007]: E0218 19:54:29.159851 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feff417b-0619-44f9-8239-609d1989c29c" containerName="extract-content" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.159861 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="feff417b-0619-44f9-8239-609d1989c29c" containerName="extract-content" Feb 18 19:54:29 crc kubenswrapper[5007]: E0218 19:54:29.159879 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e40f93-0889-414e-aac3-710e9fbd2ade" containerName="registry-server" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.159887 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e40f93-0889-414e-aac3-710e9fbd2ade" containerName="registry-server" Feb 18 19:54:29 crc kubenswrapper[5007]: E0218 19:54:29.159895 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e40f93-0889-414e-aac3-710e9fbd2ade" containerName="extract-utilities" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.159903 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e40f93-0889-414e-aac3-710e9fbd2ade" containerName="extract-utilities" Feb 18 19:54:29 crc kubenswrapper[5007]: E0218 19:54:29.159931 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44790e5-a554-4ae5-80a7-c3e785dd5918" containerName="registry-server" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.159939 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44790e5-a554-4ae5-80a7-c3e785dd5918" containerName="registry-server" Feb 18 19:54:29 crc 
kubenswrapper[5007]: E0218 19:54:29.159952 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e40f93-0889-414e-aac3-710e9fbd2ade" containerName="extract-content" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.159960 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e40f93-0889-414e-aac3-710e9fbd2ade" containerName="extract-content" Feb 18 19:54:29 crc kubenswrapper[5007]: E0218 19:54:29.159974 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44790e5-a554-4ae5-80a7-c3e785dd5918" containerName="extract-utilities" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.159982 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44790e5-a554-4ae5-80a7-c3e785dd5918" containerName="extract-utilities" Feb 18 19:54:29 crc kubenswrapper[5007]: E0218 19:54:29.160018 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44790e5-a554-4ae5-80a7-c3e785dd5918" containerName="extract-content" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.160027 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44790e5-a554-4ae5-80a7-c3e785dd5918" containerName="extract-content" Feb 18 19:54:29 crc kubenswrapper[5007]: E0218 19:54:29.160050 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feff417b-0619-44f9-8239-609d1989c29c" containerName="extract-utilities" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.160058 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="feff417b-0619-44f9-8239-609d1989c29c" containerName="extract-utilities" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.160288 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e40f93-0889-414e-aac3-710e9fbd2ade" containerName="registry-server" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.160327 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44790e5-a554-4ae5-80a7-c3e785dd5918" containerName="registry-server" Feb 18 19:54:29 
crc kubenswrapper[5007]: I0218 19:54:29.160341 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="feff417b-0619-44f9-8239-609d1989c29c" containerName="registry-server" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.161599 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.164764 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.164823 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.164966 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.165143 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-nqrph" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.172130 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-dwh9w"] Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.228618 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-8qp92"] Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.229985 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.232614 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.272158 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2tmb\" (UniqueName: \"kubernetes.io/projected/1b1ffc7a-47d3-4119-a38b-618b01856cab-kube-api-access-v2tmb\") pod \"dnsmasq-dns-78ff9dfd65-dwh9w\" (UID: \"1b1ffc7a-47d3-4119-a38b-618b01856cab\") " pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.272207 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93eae86-9b71-4a99-a981-e1dd6c2a028d-dns-svc\") pod \"dnsmasq-dns-574fdb7f99-8qp92\" (UID: \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\") " pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.272282 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnk76\" (UniqueName: \"kubernetes.io/projected/e93eae86-9b71-4a99-a981-e1dd6c2a028d-kube-api-access-bnk76\") pod \"dnsmasq-dns-574fdb7f99-8qp92\" (UID: \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\") " pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.272303 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93eae86-9b71-4a99-a981-e1dd6c2a028d-config\") pod \"dnsmasq-dns-574fdb7f99-8qp92\" (UID: \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\") " pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.272319 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ffc7a-47d3-4119-a38b-618b01856cab-config\") pod \"dnsmasq-dns-78ff9dfd65-dwh9w\" (UID: \"1b1ffc7a-47d3-4119-a38b-618b01856cab\") " pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.278030 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-8qp92"] Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.373759 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnk76\" (UniqueName: \"kubernetes.io/projected/e93eae86-9b71-4a99-a981-e1dd6c2a028d-kube-api-access-bnk76\") pod \"dnsmasq-dns-574fdb7f99-8qp92\" (UID: \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\") " pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.373810 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93eae86-9b71-4a99-a981-e1dd6c2a028d-config\") pod \"dnsmasq-dns-574fdb7f99-8qp92\" (UID: \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\") " pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.373837 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ffc7a-47d3-4119-a38b-618b01856cab-config\") pod \"dnsmasq-dns-78ff9dfd65-dwh9w\" (UID: \"1b1ffc7a-47d3-4119-a38b-618b01856cab\") " pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.373922 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2tmb\" (UniqueName: \"kubernetes.io/projected/1b1ffc7a-47d3-4119-a38b-618b01856cab-kube-api-access-v2tmb\") pod \"dnsmasq-dns-78ff9dfd65-dwh9w\" (UID: \"1b1ffc7a-47d3-4119-a38b-618b01856cab\") " pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" Feb 18 19:54:29 crc 
kubenswrapper[5007]: I0218 19:54:29.373955 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93eae86-9b71-4a99-a981-e1dd6c2a028d-dns-svc\") pod \"dnsmasq-dns-574fdb7f99-8qp92\" (UID: \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\") " pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.374808 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93eae86-9b71-4a99-a981-e1dd6c2a028d-config\") pod \"dnsmasq-dns-574fdb7f99-8qp92\" (UID: \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\") " pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.374881 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ffc7a-47d3-4119-a38b-618b01856cab-config\") pod \"dnsmasq-dns-78ff9dfd65-dwh9w\" (UID: \"1b1ffc7a-47d3-4119-a38b-618b01856cab\") " pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.375007 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93eae86-9b71-4a99-a981-e1dd6c2a028d-dns-svc\") pod \"dnsmasq-dns-574fdb7f99-8qp92\" (UID: \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\") " pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.394885 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnk76\" (UniqueName: \"kubernetes.io/projected/e93eae86-9b71-4a99-a981-e1dd6c2a028d-kube-api-access-bnk76\") pod \"dnsmasq-dns-574fdb7f99-8qp92\" (UID: \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\") " pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.395655 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v2tmb\" (UniqueName: \"kubernetes.io/projected/1b1ffc7a-47d3-4119-a38b-618b01856cab-kube-api-access-v2tmb\") pod \"dnsmasq-dns-78ff9dfd65-dwh9w\" (UID: \"1b1ffc7a-47d3-4119-a38b-618b01856cab\") " pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.485476 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.549354 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.835551 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-8qp92"] Feb 18 19:54:29 crc kubenswrapper[5007]: I0218 19:54:29.992699 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-dwh9w"] Feb 18 19:54:29 crc kubenswrapper[5007]: W0218 19:54:29.997025 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b1ffc7a_47d3_4119_a38b_618b01856cab.slice/crio-b0387cd370ab025c6e67adf8938d4f806fbc12dccfbeb6dfc2428b935240947d WatchSource:0}: Error finding container b0387cd370ab025c6e67adf8938d4f806fbc12dccfbeb6dfc2428b935240947d: Status 404 returned error can't find the container with id b0387cd370ab025c6e67adf8938d4f806fbc12dccfbeb6dfc2428b935240947d Feb 18 19:54:30 crc kubenswrapper[5007]: I0218 19:54:30.201223 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" event={"ID":"1b1ffc7a-47d3-4119-a38b-618b01856cab","Type":"ContainerStarted","Data":"b0387cd370ab025c6e67adf8938d4f806fbc12dccfbeb6dfc2428b935240947d"} Feb 18 19:54:30 crc kubenswrapper[5007]: I0218 19:54:30.202875 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" 
event={"ID":"e93eae86-9b71-4a99-a981-e1dd6c2a028d","Type":"ContainerStarted","Data":"ff3a6d5c978e9e20967a800d360423974f62a1ae85be6fa70a66b0bb4d6e8ab0"} Feb 18 19:54:32 crc kubenswrapper[5007]: I0218 19:54:32.825230 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-8qp92"] Feb 18 19:54:32 crc kubenswrapper[5007]: I0218 19:54:32.848577 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d96cb4cc5-hgz8p"] Feb 18 19:54:32 crc kubenswrapper[5007]: I0218 19:54:32.852065 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" Feb 18 19:54:32 crc kubenswrapper[5007]: I0218 19:54:32.862406 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d96cb4cc5-hgz8p"] Feb 18 19:54:32 crc kubenswrapper[5007]: I0218 19:54:32.939241 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ec8b75e-b2fc-4d54-b528-4539856358b7-dns-svc\") pod \"dnsmasq-dns-5d96cb4cc5-hgz8p\" (UID: \"8ec8b75e-b2fc-4d54-b528-4539856358b7\") " pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" Feb 18 19:54:32 crc kubenswrapper[5007]: I0218 19:54:32.939346 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gsl2\" (UniqueName: \"kubernetes.io/projected/8ec8b75e-b2fc-4d54-b528-4539856358b7-kube-api-access-7gsl2\") pod \"dnsmasq-dns-5d96cb4cc5-hgz8p\" (UID: \"8ec8b75e-b2fc-4d54-b528-4539856358b7\") " pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" Feb 18 19:54:32 crc kubenswrapper[5007]: I0218 19:54:32.939391 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ec8b75e-b2fc-4d54-b528-4539856358b7-config\") pod \"dnsmasq-dns-5d96cb4cc5-hgz8p\" (UID: \"8ec8b75e-b2fc-4d54-b528-4539856358b7\") " 
pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.040740 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ec8b75e-b2fc-4d54-b528-4539856358b7-dns-svc\") pod \"dnsmasq-dns-5d96cb4cc5-hgz8p\" (UID: \"8ec8b75e-b2fc-4d54-b528-4539856358b7\") " pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.040838 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gsl2\" (UniqueName: \"kubernetes.io/projected/8ec8b75e-b2fc-4d54-b528-4539856358b7-kube-api-access-7gsl2\") pod \"dnsmasq-dns-5d96cb4cc5-hgz8p\" (UID: \"8ec8b75e-b2fc-4d54-b528-4539856358b7\") " pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.040899 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ec8b75e-b2fc-4d54-b528-4539856358b7-config\") pod \"dnsmasq-dns-5d96cb4cc5-hgz8p\" (UID: \"8ec8b75e-b2fc-4d54-b528-4539856358b7\") " pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.041673 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ec8b75e-b2fc-4d54-b528-4539856358b7-dns-svc\") pod \"dnsmasq-dns-5d96cb4cc5-hgz8p\" (UID: \"8ec8b75e-b2fc-4d54-b528-4539856358b7\") " pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.041683 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ec8b75e-b2fc-4d54-b528-4539856358b7-config\") pod \"dnsmasq-dns-5d96cb4cc5-hgz8p\" (UID: \"8ec8b75e-b2fc-4d54-b528-4539856358b7\") " pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.077326 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gsl2\" (UniqueName: \"kubernetes.io/projected/8ec8b75e-b2fc-4d54-b528-4539856358b7-kube-api-access-7gsl2\") pod \"dnsmasq-dns-5d96cb4cc5-hgz8p\" (UID: \"8ec8b75e-b2fc-4d54-b528-4539856358b7\") " pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.121627 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-dwh9w"]
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.155134 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-766f65774c-c47q7"]
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.156999 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766f65774c-c47q7"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.167346 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.167803 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-766f65774c-c47q7"]
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.243976 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/debb8181-4165-44fd-83ed-0698c664676a-dns-svc\") pod \"dnsmasq-dns-766f65774c-c47q7\" (UID: \"debb8181-4165-44fd-83ed-0698c664676a\") " pod="openstack/dnsmasq-dns-766f65774c-c47q7"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.244044 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntmt\" (UniqueName: \"kubernetes.io/projected/debb8181-4165-44fd-83ed-0698c664676a-kube-api-access-bntmt\") pod \"dnsmasq-dns-766f65774c-c47q7\" (UID: \"debb8181-4165-44fd-83ed-0698c664676a\") " pod="openstack/dnsmasq-dns-766f65774c-c47q7"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.244143 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debb8181-4165-44fd-83ed-0698c664676a-config\") pod \"dnsmasq-dns-766f65774c-c47q7\" (UID: \"debb8181-4165-44fd-83ed-0698c664676a\") " pod="openstack/dnsmasq-dns-766f65774c-c47q7"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.349360 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bntmt\" (UniqueName: \"kubernetes.io/projected/debb8181-4165-44fd-83ed-0698c664676a-kube-api-access-bntmt\") pod \"dnsmasq-dns-766f65774c-c47q7\" (UID: \"debb8181-4165-44fd-83ed-0698c664676a\") " pod="openstack/dnsmasq-dns-766f65774c-c47q7"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.349419 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debb8181-4165-44fd-83ed-0698c664676a-config\") pod \"dnsmasq-dns-766f65774c-c47q7\" (UID: \"debb8181-4165-44fd-83ed-0698c664676a\") " pod="openstack/dnsmasq-dns-766f65774c-c47q7"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.349519 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/debb8181-4165-44fd-83ed-0698c664676a-dns-svc\") pod \"dnsmasq-dns-766f65774c-c47q7\" (UID: \"debb8181-4165-44fd-83ed-0698c664676a\") " pod="openstack/dnsmasq-dns-766f65774c-c47q7"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.350365 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debb8181-4165-44fd-83ed-0698c664676a-config\") pod \"dnsmasq-dns-766f65774c-c47q7\" (UID: \"debb8181-4165-44fd-83ed-0698c664676a\") " pod="openstack/dnsmasq-dns-766f65774c-c47q7"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.350444 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/debb8181-4165-44fd-83ed-0698c664676a-dns-svc\") pod \"dnsmasq-dns-766f65774c-c47q7\" (UID: \"debb8181-4165-44fd-83ed-0698c664676a\") " pod="openstack/dnsmasq-dns-766f65774c-c47q7"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.367604 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bntmt\" (UniqueName: \"kubernetes.io/projected/debb8181-4165-44fd-83ed-0698c664676a-kube-api-access-bntmt\") pod \"dnsmasq-dns-766f65774c-c47q7\" (UID: \"debb8181-4165-44fd-83ed-0698c664676a\") " pod="openstack/dnsmasq-dns-766f65774c-c47q7"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.475061 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766f65774c-c47q7"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.486457 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d96cb4cc5-hgz8p"]
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.519360 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7587d7df99-2cxpn"]
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.520400 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.538196 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7587d7df99-2cxpn"]
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.553422 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a9bc90-43bb-45f9-8c66-f39508e55213-dns-svc\") pod \"dnsmasq-dns-7587d7df99-2cxpn\" (UID: \"a2a9bc90-43bb-45f9-8c66-f39508e55213\") " pod="openstack/dnsmasq-dns-7587d7df99-2cxpn"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.553556 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a9bc90-43bb-45f9-8c66-f39508e55213-config\") pod \"dnsmasq-dns-7587d7df99-2cxpn\" (UID: \"a2a9bc90-43bb-45f9-8c66-f39508e55213\") " pod="openstack/dnsmasq-dns-7587d7df99-2cxpn"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.553655 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w6rb\" (UniqueName: \"kubernetes.io/projected/a2a9bc90-43bb-45f9-8c66-f39508e55213-kube-api-access-7w6rb\") pod \"dnsmasq-dns-7587d7df99-2cxpn\" (UID: \"a2a9bc90-43bb-45f9-8c66-f39508e55213\") " pod="openstack/dnsmasq-dns-7587d7df99-2cxpn"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.655134 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a9bc90-43bb-45f9-8c66-f39508e55213-dns-svc\") pod \"dnsmasq-dns-7587d7df99-2cxpn\" (UID: \"a2a9bc90-43bb-45f9-8c66-f39508e55213\") " pod="openstack/dnsmasq-dns-7587d7df99-2cxpn"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.655210 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a9bc90-43bb-45f9-8c66-f39508e55213-config\") pod \"dnsmasq-dns-7587d7df99-2cxpn\" (UID: \"a2a9bc90-43bb-45f9-8c66-f39508e55213\") " pod="openstack/dnsmasq-dns-7587d7df99-2cxpn"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.655287 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w6rb\" (UniqueName: \"kubernetes.io/projected/a2a9bc90-43bb-45f9-8c66-f39508e55213-kube-api-access-7w6rb\") pod \"dnsmasq-dns-7587d7df99-2cxpn\" (UID: \"a2a9bc90-43bb-45f9-8c66-f39508e55213\") " pod="openstack/dnsmasq-dns-7587d7df99-2cxpn"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.656182 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a9bc90-43bb-45f9-8c66-f39508e55213-dns-svc\") pod \"dnsmasq-dns-7587d7df99-2cxpn\" (UID: \"a2a9bc90-43bb-45f9-8c66-f39508e55213\") " pod="openstack/dnsmasq-dns-7587d7df99-2cxpn"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.656520 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a9bc90-43bb-45f9-8c66-f39508e55213-config\") pod \"dnsmasq-dns-7587d7df99-2cxpn\" (UID: \"a2a9bc90-43bb-45f9-8c66-f39508e55213\") " pod="openstack/dnsmasq-dns-7587d7df99-2cxpn"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.672299 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w6rb\" (UniqueName: \"kubernetes.io/projected/a2a9bc90-43bb-45f9-8c66-f39508e55213-kube-api-access-7w6rb\") pod \"dnsmasq-dns-7587d7df99-2cxpn\" (UID: \"a2a9bc90-43bb-45f9-8c66-f39508e55213\") " pod="openstack/dnsmasq-dns-7587d7df99-2cxpn"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.836815 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn"
Feb 18 19:54:33 crc kubenswrapper[5007]: I0218 19:54:33.998723 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"]
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.006009 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.011756 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.012193 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.012321 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.012454 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.012613 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.012667 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-2klmf"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.012772 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.018551 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"]
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.068139 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c7639d7-1f64-49c0-981d-c6231836c6d4-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.068205 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.068242 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c7639d7-1f64-49c0-981d-c6231836c6d4-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.068263 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c7639d7-1f64-49c0-981d-c6231836c6d4-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.068286 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c7639d7-1f64-49c0-981d-c6231836c6d4-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.068307 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c7639d7-1f64-49c0-981d-c6231836c6d4-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.068347 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bc4w\" (UniqueName: \"kubernetes.io/projected/9c7639d7-1f64-49c0-981d-c6231836c6d4-kube-api-access-5bc4w\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.068368 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c7639d7-1f64-49c0-981d-c6231836c6d4-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.068386 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c7639d7-1f64-49c0-981d-c6231836c6d4-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.068411 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c7639d7-1f64-49c0-981d-c6231836c6d4-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.068444 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c7639d7-1f64-49c0-981d-c6231836c6d4-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.170188 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c7639d7-1f64-49c0-981d-c6231836c6d4-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.170247 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c7639d7-1f64-49c0-981d-c6231836c6d4-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.170273 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c7639d7-1f64-49c0-981d-c6231836c6d4-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.170295 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c7639d7-1f64-49c0-981d-c6231836c6d4-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.170339 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bc4w\" (UniqueName: \"kubernetes.io/projected/9c7639d7-1f64-49c0-981d-c6231836c6d4-kube-api-access-5bc4w\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.170366 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c7639d7-1f64-49c0-981d-c6231836c6d4-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.170381 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c7639d7-1f64-49c0-981d-c6231836c6d4-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.170406 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c7639d7-1f64-49c0-981d-c6231836c6d4-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.170452 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c7639d7-1f64-49c0-981d-c6231836c6d4-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.170683 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c7639d7-1f64-49c0-981d-c6231836c6d4-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.170710 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.171000 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.171072 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c7639d7-1f64-49c0-981d-c6231836c6d4-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.171422 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c7639d7-1f64-49c0-981d-c6231836c6d4-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.171446 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c7639d7-1f64-49c0-981d-c6231836c6d4-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.172004 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c7639d7-1f64-49c0-981d-c6231836c6d4-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.173926 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c7639d7-1f64-49c0-981d-c6231836c6d4-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.174919 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c7639d7-1f64-49c0-981d-c6231836c6d4-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.175215 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c7639d7-1f64-49c0-981d-c6231836c6d4-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.175679 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c7639d7-1f64-49c0-981d-c6231836c6d4-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.183968 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c7639d7-1f64-49c0-981d-c6231836c6d4-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.187850 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bc4w\" (UniqueName: \"kubernetes.io/projected/9c7639d7-1f64-49c0-981d-c6231836c6d4-kube-api-access-5bc4w\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.190764 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"9c7639d7-1f64-49c0-981d-c6231836c6d4\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.305931 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.307074 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.309572 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.310297 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nnmpd"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.310691 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.313779 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.313967 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.314115 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.314635 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.314807 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.321944 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.375886 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.376231 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.376598 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.376784 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e34afaa-d033-4975-9f1f-e81ab6b743a9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.376983 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e34afaa-d033-4975-9f1f-e81ab6b743a9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.377181 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.377359 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.377548 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.377722 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.377887 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b8p5\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-kube-api-access-5b8p5\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.378070 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-config-data\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.479909 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.479973 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.480008 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.480045 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.480069 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b8p5\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-kube-api-access-5b8p5\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.480109 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-config-data\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.480138 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.480164 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.480186 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.480224 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e34afaa-d033-4975-9f1f-e81ab6b743a9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.480261 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e34afaa-d033-4975-9f1f-e81ab6b743a9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.481035 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.481890 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.481910 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.483119 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0"
Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.483274 5007 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.483367 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-config-data\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.491319 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e34afaa-d033-4975-9f1f-e81ab6b743a9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.491554 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.492381 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e34afaa-d033-4975-9f1f-e81ab6b743a9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.495850 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " 
pod="openstack/rabbitmq-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.500810 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b8p5\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-kube-api-access-5b8p5\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.508696 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " pod="openstack/rabbitmq-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.628525 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.630453 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.632213 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.635045 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-h4sq6" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.635392 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.635676 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.635908 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.636792 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.637705 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.638190 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.649597 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.683056 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.683116 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.683138 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48869d89-959c-4f88-b8cb-1eb83e7fce30-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.683157 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.683197 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvc2m\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-kube-api-access-tvc2m\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.683239 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.683283 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.683316 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.683345 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.683372 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48869d89-959c-4f88-b8cb-1eb83e7fce30-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.683387 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.785865 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.785977 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.786011 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.786034 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48869d89-959c-4f88-b8cb-1eb83e7fce30-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.786089 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc2m\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-kube-api-access-tvc2m\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.786137 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.786191 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.786229 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.786270 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.786306 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48869d89-959c-4f88-b8cb-1eb83e7fce30-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.786338 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.786856 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.787067 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.787304 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.787734 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.787923 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc 
kubenswrapper[5007]: I0218 19:54:34.788553 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.795211 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.795410 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.797396 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48869d89-959c-4f88-b8cb-1eb83e7fce30-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.808267 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48869d89-959c-4f88-b8cb-1eb83e7fce30-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.828936 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.836809 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvc2m\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-kube-api-access-tvc2m\") pod \"rabbitmq-cell1-server-0\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:34 crc kubenswrapper[5007]: I0218 19:54:34.949082 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:54:35 crc kubenswrapper[5007]: I0218 19:54:35.980359 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:54:35 crc kubenswrapper[5007]: I0218 19:54:35.984344 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 19:54:35 crc kubenswrapper[5007]: I0218 19:54:35.989902 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 18 19:54:35 crc kubenswrapper[5007]: I0218 19:54:35.992288 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 18 19:54:35 crc kubenswrapper[5007]: I0218 19:54:35.992510 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 18 19:54:35 crc kubenswrapper[5007]: I0218 19:54:35.992702 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-qc7xq" Feb 18 19:54:35 crc kubenswrapper[5007]: I0218 19:54:35.996122 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.007107 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.110259 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.110318 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.110358 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-config-data-default\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.110388 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.110460 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-kolla-config\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.110487 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25xfg\" (UniqueName: \"kubernetes.io/projected/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-kube-api-access-25xfg\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.110514 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.110530 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.211705 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-config-data-default\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.211759 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.211807 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-kolla-config\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.211835 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25xfg\" (UniqueName: \"kubernetes.io/projected/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-kube-api-access-25xfg\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.211871 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-combined-ca-bundle\") pod 
\"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.211895 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.211925 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.211956 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.212900 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.213131 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-config-data-default\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 
19:54:36.213347 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.213366 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.213860 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-kolla-config\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.217537 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.217584 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.233126 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.240417 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25xfg\" (UniqueName: \"kubernetes.io/projected/d0e8a83e-b230-482d-bcf9-d2d8f5d474e4-kube-api-access-25xfg\") pod \"openstack-galera-0\" (UID: \"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4\") " pod="openstack/openstack-galera-0" Feb 18 19:54:36 crc kubenswrapper[5007]: I0218 19:54:36.321884 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.112474 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.113786 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.118963 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.119238 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.119298 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.119407 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zwn5s" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.126554 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.228960 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43aa0db4-e558-4914-9ef5-249b2c6f4328-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.229018 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.229038 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcx26\" (UniqueName: \"kubernetes.io/projected/43aa0db4-e558-4914-9ef5-249b2c6f4328-kube-api-access-dcx26\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.229063 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43aa0db4-e558-4914-9ef5-249b2c6f4328-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.229085 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43aa0db4-e558-4914-9ef5-249b2c6f4328-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.229120 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43aa0db4-e558-4914-9ef5-249b2c6f4328-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.229140 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43aa0db4-e558-4914-9ef5-249b2c6f4328-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.229172 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43aa0db4-e558-4914-9ef5-249b2c6f4328-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.330797 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43aa0db4-e558-4914-9ef5-249b2c6f4328-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.330856 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43aa0db4-e558-4914-9ef5-249b2c6f4328-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.330913 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43aa0db4-e558-4914-9ef5-249b2c6f4328-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.330986 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43aa0db4-e558-4914-9ef5-249b2c6f4328-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.331022 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.331041 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcx26\" (UniqueName: \"kubernetes.io/projected/43aa0db4-e558-4914-9ef5-249b2c6f4328-kube-api-access-dcx26\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.331066 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43aa0db4-e558-4914-9ef5-249b2c6f4328-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.331095 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43aa0db4-e558-4914-9ef5-249b2c6f4328-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.331808 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43aa0db4-e558-4914-9ef5-249b2c6f4328-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.332269 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.332287 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43aa0db4-e558-4914-9ef5-249b2c6f4328-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.335638 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43aa0db4-e558-4914-9ef5-249b2c6f4328-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.337594 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/43aa0db4-e558-4914-9ef5-249b2c6f4328-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.339351 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43aa0db4-e558-4914-9ef5-249b2c6f4328-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.340202 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43aa0db4-e558-4914-9ef5-249b2c6f4328-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.357282 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.364984 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcx26\" (UniqueName: \"kubernetes.io/projected/43aa0db4-e558-4914-9ef5-249b2c6f4328-kube-api-access-dcx26\") pod \"openstack-cell1-galera-0\" (UID: \"43aa0db4-e558-4914-9ef5-249b2c6f4328\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.449303 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.574493 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.575658 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.578250 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.578518 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.587241 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-pkph7" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.589117 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.738337 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/53311c89-f482-4136-beab-69e0c69ed6e3-kolla-config\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.738475 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7zxt\" (UniqueName: \"kubernetes.io/projected/53311c89-f482-4136-beab-69e0c69ed6e3-kube-api-access-l7zxt\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.738533 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/53311c89-f482-4136-beab-69e0c69ed6e3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.738568 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/53311c89-f482-4136-beab-69e0c69ed6e3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.738592 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53311c89-f482-4136-beab-69e0c69ed6e3-config-data\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.840692 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7zxt\" (UniqueName: \"kubernetes.io/projected/53311c89-f482-4136-beab-69e0c69ed6e3-kube-api-access-l7zxt\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.840812 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53311c89-f482-4136-beab-69e0c69ed6e3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.840859 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/53311c89-f482-4136-beab-69e0c69ed6e3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " 
pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.840896 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53311c89-f482-4136-beab-69e0c69ed6e3-config-data\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.840985 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/53311c89-f482-4136-beab-69e0c69ed6e3-kolla-config\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.843537 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53311c89-f482-4136-beab-69e0c69ed6e3-config-data\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.844398 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/53311c89-f482-4136-beab-69e0c69ed6e3-kolla-config\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.846772 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53311c89-f482-4136-beab-69e0c69ed6e3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.847518 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/53311c89-f482-4136-beab-69e0c69ed6e3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.865548 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7zxt\" (UniqueName: \"kubernetes.io/projected/53311c89-f482-4136-beab-69e0c69ed6e3-kube-api-access-l7zxt\") pod \"memcached-0\" (UID: \"53311c89-f482-4136-beab-69e0c69ed6e3\") " pod="openstack/memcached-0" Feb 18 19:54:37 crc kubenswrapper[5007]: I0218 19:54:37.905968 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 19:54:39 crc kubenswrapper[5007]: I0218 19:54:39.797154 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:54:39 crc kubenswrapper[5007]: I0218 19:54:39.798244 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:54:39 crc kubenswrapper[5007]: I0218 19:54:39.801578 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-x5xsk" Feb 18 19:54:39 crc kubenswrapper[5007]: I0218 19:54:39.812736 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:54:39 crc kubenswrapper[5007]: I0218 19:54:39.970621 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttp8t\" (UniqueName: \"kubernetes.io/projected/9e731197-939b-4501-8baf-54690037a9ca-kube-api-access-ttp8t\") pod \"kube-state-metrics-0\" (UID: \"9e731197-939b-4501-8baf-54690037a9ca\") " pod="openstack/kube-state-metrics-0" Feb 18 19:54:40 crc kubenswrapper[5007]: I0218 19:54:40.072298 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttp8t\" (UniqueName: 
\"kubernetes.io/projected/9e731197-939b-4501-8baf-54690037a9ca-kube-api-access-ttp8t\") pod \"kube-state-metrics-0\" (UID: \"9e731197-939b-4501-8baf-54690037a9ca\") " pod="openstack/kube-state-metrics-0" Feb 18 19:54:40 crc kubenswrapper[5007]: I0218 19:54:40.121641 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttp8t\" (UniqueName: \"kubernetes.io/projected/9e731197-939b-4501-8baf-54690037a9ca-kube-api-access-ttp8t\") pod \"kube-state-metrics-0\" (UID: \"9e731197-939b-4501-8baf-54690037a9ca\") " pod="openstack/kube-state-metrics-0" Feb 18 19:54:40 crc kubenswrapper[5007]: I0218 19:54:40.184093 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.142682 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.145849 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.148019 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.148527 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.148692 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.148756 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.148848 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-4q7sz" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.150677 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.151440 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.153617 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.171885 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.291641 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.291686 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.291712 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d5c55070-9969-4f85-be20-230c847f2722-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.291745 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.291959 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.292090 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d5c55070-9969-4f85-be20-230c847f2722-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.292169 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-config\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.292223 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.292294 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5msm6\" (UniqueName: \"kubernetes.io/projected/d5c55070-9969-4f85-be20-230c847f2722-kube-api-access-5msm6\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.292350 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.394351 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.394398 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.394422 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d5c55070-9969-4f85-be20-230c847f2722-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.394471 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.394512 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"prometheus-metric-storage-0\" (UID: 
\"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.394539 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d5c55070-9969-4f85-be20-230c847f2722-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.394561 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-config\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.394584 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.394606 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5msm6\" (UniqueName: \"kubernetes.io/projected/d5c55070-9969-4f85-be20-230c847f2722-kube-api-access-5msm6\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.394623 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-1\") pod 
\"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.395407 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.396093 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.405147 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.405256 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.405390 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/d5c55070-9969-4f85-be20-230c847f2722-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.408873 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d5c55070-9969-4f85-be20-230c847f2722-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.409236 5007 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.409285 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1309d09147a960927b84a66d0aaca53d5d9063fe1f3c754674268f7a120fbef0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.412015 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5msm6\" (UniqueName: \"kubernetes.io/projected/d5c55070-9969-4f85-be20-230c847f2722-kube-api-access-5msm6\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.413005 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.420047 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.450340 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"prometheus-metric-storage-0\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:41 crc kubenswrapper[5007]: I0218 19:54:41.471850 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:54:42 crc kubenswrapper[5007]: I0218 19:54:42.879890 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7jbdh"] Feb 18 19:54:42 crc kubenswrapper[5007]: I0218 19:54:42.881170 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:42 crc kubenswrapper[5007]: I0218 19:54:42.883034 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-v2qrq" Feb 18 19:54:42 crc kubenswrapper[5007]: I0218 19:54:42.883088 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 19:54:42 crc kubenswrapper[5007]: I0218 19:54:42.886313 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 19:54:42 crc kubenswrapper[5007]: I0218 19:54:42.891668 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6tjjw"] Feb 18 19:54:42 crc kubenswrapper[5007]: I0218 19:54:42.893244 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:42 crc kubenswrapper[5007]: I0218 19:54:42.903441 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7jbdh"] Feb 18 19:54:42 crc kubenswrapper[5007]: I0218 19:54:42.932857 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6tjjw"] Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022236 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb28594a-2fde-4b1e-8caa-0590add450bb-scripts\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022379 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42758d74-6ff7-4d38-960f-41e387147352-scripts\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " 
pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022405 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bb28594a-2fde-4b1e-8caa-0590add450bb-var-log\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022444 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42758d74-6ff7-4d38-960f-41e387147352-var-log-ovn\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022486 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb28594a-2fde-4b1e-8caa-0590add450bb-var-run\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022518 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf6vl\" (UniqueName: \"kubernetes.io/projected/bb28594a-2fde-4b1e-8caa-0590add450bb-kube-api-access-vf6vl\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022577 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/42758d74-6ff7-4d38-960f-41e387147352-ovn-controller-tls-certs\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " 
pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022621 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bb28594a-2fde-4b1e-8caa-0590add450bb-var-lib\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022643 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42758d74-6ff7-4d38-960f-41e387147352-combined-ca-bundle\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022668 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42758d74-6ff7-4d38-960f-41e387147352-var-run\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022731 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkggc\" (UniqueName: \"kubernetes.io/projected/42758d74-6ff7-4d38-960f-41e387147352-kube-api-access-wkggc\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022796 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42758d74-6ff7-4d38-960f-41e387147352-var-run-ovn\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 
19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.022819 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bb28594a-2fde-4b1e-8caa-0590add450bb-etc-ovs\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.124837 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42758d74-6ff7-4d38-960f-41e387147352-scripts\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.131534 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bb28594a-2fde-4b1e-8caa-0590add450bb-var-log\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.131573 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42758d74-6ff7-4d38-960f-41e387147352-var-log-ovn\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.131618 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb28594a-2fde-4b1e-8caa-0590add450bb-var-run\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.131674 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vf6vl\" (UniqueName: \"kubernetes.io/projected/bb28594a-2fde-4b1e-8caa-0590add450bb-kube-api-access-vf6vl\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.131718 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/42758d74-6ff7-4d38-960f-41e387147352-ovn-controller-tls-certs\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.131782 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bb28594a-2fde-4b1e-8caa-0590add450bb-var-lib\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.131800 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42758d74-6ff7-4d38-960f-41e387147352-combined-ca-bundle\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.131818 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42758d74-6ff7-4d38-960f-41e387147352-var-run\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.131885 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkggc\" (UniqueName: 
\"kubernetes.io/projected/42758d74-6ff7-4d38-960f-41e387147352-kube-api-access-wkggc\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.131923 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42758d74-6ff7-4d38-960f-41e387147352-var-run-ovn\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.131939 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bb28594a-2fde-4b1e-8caa-0590add450bb-etc-ovs\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.132022 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb28594a-2fde-4b1e-8caa-0590add450bb-scripts\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.133873 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb28594a-2fde-4b1e-8caa-0590add450bb-scripts\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.129639 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42758d74-6ff7-4d38-960f-41e387147352-scripts\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " 
pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.134279 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bb28594a-2fde-4b1e-8caa-0590add450bb-var-log\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.134360 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42758d74-6ff7-4d38-960f-41e387147352-var-log-ovn\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.134472 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb28594a-2fde-4b1e-8caa-0590add450bb-var-run\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.136620 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42758d74-6ff7-4d38-960f-41e387147352-var-run\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.136856 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bb28594a-2fde-4b1e-8caa-0590add450bb-var-lib\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.141111 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/bb28594a-2fde-4b1e-8caa-0590add450bb-etc-ovs\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.142456 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/42758d74-6ff7-4d38-960f-41e387147352-ovn-controller-tls-certs\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.147189 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42758d74-6ff7-4d38-960f-41e387147352-combined-ca-bundle\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.154608 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42758d74-6ff7-4d38-960f-41e387147352-var-run-ovn\") pod \"ovn-controller-7jbdh\" (UID: \"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.155017 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf6vl\" (UniqueName: \"kubernetes.io/projected/bb28594a-2fde-4b1e-8caa-0590add450bb-kube-api-access-vf6vl\") pod \"ovn-controller-ovs-6tjjw\" (UID: \"bb28594a-2fde-4b1e-8caa-0590add450bb\") " pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.184333 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkggc\" (UniqueName: \"kubernetes.io/projected/42758d74-6ff7-4d38-960f-41e387147352-kube-api-access-wkggc\") pod \"ovn-controller-7jbdh\" (UID: 
\"42758d74-6ff7-4d38-960f-41e387147352\") " pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.212665 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7jbdh" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.220886 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.768627 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.770080 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.774134 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.774365 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-46qmk" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.774423 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.774813 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.774831 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.780640 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.947833 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.947944 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.947977 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.948010 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.948042 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-config\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.948093 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlhm5\" (UniqueName: \"kubernetes.io/projected/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-kube-api-access-tlhm5\") pod 
\"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.948149 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:43 crc kubenswrapper[5007]: I0218 19:54:43.948217 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.049697 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlhm5\" (UniqueName: \"kubernetes.io/projected/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-kube-api-access-tlhm5\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.049766 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.049831 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " 
pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.049865 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.049955 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.049981 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.050010 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.050042 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-config\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.051182 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-config\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.052301 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.053904 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.054150 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.055552 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.057956 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " 
pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.066152 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.066893 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlhm5\" (UniqueName: \"kubernetes.io/projected/0de2ea3c-89f5-4df2-841e-904cebc9fb7d-kube-api-access-tlhm5\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.084675 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0de2ea3c-89f5-4df2-841e-904cebc9fb7d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:44 crc kubenswrapper[5007]: I0218 19:54:44.102838 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 19:54:45 crc kubenswrapper[5007]: I0218 19:54:45.664032 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:54:45 crc kubenswrapper[5007]: I0218 19:54:45.664279 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:54:45 crc kubenswrapper[5007]: I0218 19:54:45.664320 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:54:45 crc kubenswrapper[5007]: I0218 19:54:45.664972 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0761fcff0a059a787d7d4e6ecd66cfd18d646e97ccddeb6ea48208f9ee32bdee"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:54:45 crc kubenswrapper[5007]: I0218 19:54:45.665025 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://0761fcff0a059a787d7d4e6ecd66cfd18d646e97ccddeb6ea48208f9ee32bdee" gracePeriod=600 Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.330983 5007 generic.go:334] "Generic (PLEG): container finished" 
podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="0761fcff0a059a787d7d4e6ecd66cfd18d646e97ccddeb6ea48208f9ee32bdee" exitCode=0 Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.331027 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"0761fcff0a059a787d7d4e6ecd66cfd18d646e97ccddeb6ea48208f9ee32bdee"} Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.331250 5007 scope.go:117] "RemoveContainer" containerID="10c20995e1da7316ed8d3f74629b0d4549ee655af8465574d8a6d0f66d48031f" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.774386 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.777138 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.787850 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.789602 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.790162 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-s9kfd" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.790423 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.793655 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.900095 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/8656c9d3-1d71-4e5d-a00d-2031808281d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.900176 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.900214 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8656c9d3-1d71-4e5d-a00d-2031808281d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.900275 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8656c9d3-1d71-4e5d-a00d-2031808281d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.900321 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8656c9d3-1d71-4e5d-a00d-2031808281d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.900370 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8656c9d3-1d71-4e5d-a00d-2031808281d3-ovsdb-rundir\") pod 
\"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.900401 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8656c9d3-1d71-4e5d-a00d-2031808281d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:46 crc kubenswrapper[5007]: I0218 19:54:46.900443 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs92t\" (UniqueName: \"kubernetes.io/projected/8656c9d3-1d71-4e5d-a00d-2031808281d3-kube-api-access-hs92t\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.001820 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8656c9d3-1d71-4e5d-a00d-2031808281d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.001885 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8656c9d3-1d71-4e5d-a00d-2031808281d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.001947 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8656c9d3-1d71-4e5d-a00d-2031808281d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " 
pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.002635 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8656c9d3-1d71-4e5d-a00d-2031808281d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.002952 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs92t\" (UniqueName: \"kubernetes.io/projected/8656c9d3-1d71-4e5d-a00d-2031808281d3-kube-api-access-hs92t\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.003081 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8656c9d3-1d71-4e5d-a00d-2031808281d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.003175 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.003203 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8656c9d3-1d71-4e5d-a00d-2031808281d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.003298 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8656c9d3-1d71-4e5d-a00d-2031808281d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.004505 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.006789 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8656c9d3-1d71-4e5d-a00d-2031808281d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.007373 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8656c9d3-1d71-4e5d-a00d-2031808281d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.009164 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8656c9d3-1d71-4e5d-a00d-2031808281d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.017201 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs92t\" (UniqueName: \"kubernetes.io/projected/8656c9d3-1d71-4e5d-a00d-2031808281d3-kube-api-access-hs92t\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 
19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.020917 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8656c9d3-1d71-4e5d-a00d-2031808281d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.022625 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.024270 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8656c9d3-1d71-4e5d-a00d-2031808281d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8656c9d3-1d71-4e5d-a00d-2031808281d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.130484 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 19:54:47 crc kubenswrapper[5007]: I0218 19:54:47.224762 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 18 19:54:47 crc kubenswrapper[5007]: E0218 19:54:47.579122 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 18 19:54:47 crc kubenswrapper[5007]: E0218 19:54:47.579185 5007 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 18 19:54:47 crc kubenswrapper[5007]: E0218 19:54:47.579323 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v2tmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78ff9dfd65-dwh9w_openstack(1b1ffc7a-47d3-4119-a38b-618b01856cab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:54:47 crc kubenswrapper[5007]: E0218 19:54:47.580509 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" podUID="1b1ffc7a-47d3-4119-a38b-618b01856cab" Feb 18 19:54:47 crc kubenswrapper[5007]: E0218 19:54:47.644758 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 18 19:54:47 crc kubenswrapper[5007]: E0218 19:54:47.645169 5007 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 18 19:54:47 crc kubenswrapper[5007]: E0218 19:54:47.645315 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnk76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-574fdb7f99-8qp92_openstack(e93eae86-9b71-4a99-a981-e1dd6c2a028d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:54:47 crc kubenswrapper[5007]: E0218 19:54:47.646598 5007 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" podUID="e93eae86-9b71-4a99-a981-e1dd6c2a028d" Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.220023 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:54:48 crc kubenswrapper[5007]: W0218 19:54:48.240180 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43aa0db4_e558_4914_9ef5_249b2c6f4328.slice/crio-2f4c47decb2cd5c746b08b5ea3a92671b0336ca4ee9bb182629f3c91a9ed79fb WatchSource:0}: Error finding container 2f4c47decb2cd5c746b08b5ea3a92671b0336ca4ee9bb182629f3c91a9ed79fb: Status 404 returned error can't find the container with id 2f4c47decb2cd5c746b08b5ea3a92671b0336ca4ee9bb182629f3c91a9ed79fb Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.354238 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"9c7639d7-1f64-49c0-981d-c6231836c6d4","Type":"ContainerStarted","Data":"0864f941aa1e9d370cd64e3f0b782fc42ccd5f20e057ca9fd457ee122548fdca"} Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.368456 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"735004cd6ccf501f03eefba61a70e42cc068351a26665fe459f03c839d84882e"} Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.385679 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.390160 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"43aa0db4-e558-4914-9ef5-249b2c6f4328","Type":"ContainerStarted","Data":"2f4c47decb2cd5c746b08b5ea3a92671b0336ca4ee9bb182629f3c91a9ed79fb"} Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.398506 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.423703 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7587d7df99-2cxpn"] Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.433303 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.790562 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d96cb4cc5-hgz8p"] Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.801207 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-766f65774c-c47q7"] Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.816971 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.833193 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.838476 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7jbdh"] Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.929944 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.940579 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" Feb 18 19:54:48 crc kubenswrapper[5007]: I0218 19:54:48.965003 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:54:48 crc kubenswrapper[5007]: W0218 19:54:48.981719 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8656c9d3_1d71_4e5d_a00d_2031808281d3.slice/crio-e29cd7fcb493a272859ad1fd9362f53aa46aab20d0f53dcadda482f33782313b WatchSource:0}: Error finding container e29cd7fcb493a272859ad1fd9362f53aa46aab20d0f53dcadda482f33782313b: Status 404 returned error can't find the container with id e29cd7fcb493a272859ad1fd9362f53aa46aab20d0f53dcadda482f33782313b Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.043143 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93eae86-9b71-4a99-a981-e1dd6c2a028d-dns-svc\") pod \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\" (UID: \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\") " Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.043865 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e93eae86-9b71-4a99-a981-e1dd6c2a028d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e93eae86-9b71-4a99-a981-e1dd6c2a028d" (UID: "e93eae86-9b71-4a99-a981-e1dd6c2a028d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.043893 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93eae86-9b71-4a99-a981-e1dd6c2a028d-config\") pod \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\" (UID: \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\") " Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.043960 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2tmb\" (UniqueName: \"kubernetes.io/projected/1b1ffc7a-47d3-4119-a38b-618b01856cab-kube-api-access-v2tmb\") pod \"1b1ffc7a-47d3-4119-a38b-618b01856cab\" (UID: \"1b1ffc7a-47d3-4119-a38b-618b01856cab\") " Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.044065 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnk76\" (UniqueName: \"kubernetes.io/projected/e93eae86-9b71-4a99-a981-e1dd6c2a028d-kube-api-access-bnk76\") pod \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\" (UID: \"e93eae86-9b71-4a99-a981-e1dd6c2a028d\") " Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.044097 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ffc7a-47d3-4119-a38b-618b01856cab-config\") pod \"1b1ffc7a-47d3-4119-a38b-618b01856cab\" (UID: \"1b1ffc7a-47d3-4119-a38b-618b01856cab\") " Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.044622 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e93eae86-9b71-4a99-a981-e1dd6c2a028d-config" (OuterVolumeSpecName: "config") pod "e93eae86-9b71-4a99-a981-e1dd6c2a028d" (UID: "e93eae86-9b71-4a99-a981-e1dd6c2a028d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.045361 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b1ffc7a-47d3-4119-a38b-618b01856cab-config" (OuterVolumeSpecName: "config") pod "1b1ffc7a-47d3-4119-a38b-618b01856cab" (UID: "1b1ffc7a-47d3-4119-a38b-618b01856cab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.045786 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ffc7a-47d3-4119-a38b-618b01856cab-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.045805 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e93eae86-9b71-4a99-a981-e1dd6c2a028d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.045817 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93eae86-9b71-4a99-a981-e1dd6c2a028d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.047923 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e93eae86-9b71-4a99-a981-e1dd6c2a028d-kube-api-access-bnk76" (OuterVolumeSpecName: "kube-api-access-bnk76") pod "e93eae86-9b71-4a99-a981-e1dd6c2a028d" (UID: "e93eae86-9b71-4a99-a981-e1dd6c2a028d"). InnerVolumeSpecName "kube-api-access-bnk76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.049107 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b1ffc7a-47d3-4119-a38b-618b01856cab-kube-api-access-v2tmb" (OuterVolumeSpecName: "kube-api-access-v2tmb") pod "1b1ffc7a-47d3-4119-a38b-618b01856cab" (UID: "1b1ffc7a-47d3-4119-a38b-618b01856cab"). InnerVolumeSpecName "kube-api-access-v2tmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.055528 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.128679 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.147085 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2tmb\" (UniqueName: \"kubernetes.io/projected/1b1ffc7a-47d3-4119-a38b-618b01856cab-kube-api-access-v2tmb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.147115 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnk76\" (UniqueName: \"kubernetes.io/projected/e93eae86-9b71-4a99-a981-e1dd6c2a028d-kube-api-access-bnk76\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.233031 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6tjjw"] Feb 18 19:54:49 crc kubenswrapper[5007]: W0218 19:54:49.275975 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb28594a_2fde_4b1e_8caa_0590add450bb.slice/crio-10333e218a7f7edaa1a55b275af14e6715fe904bb2cf1503c8060c9b44e5c03a WatchSource:0}: Error finding container 10333e218a7f7edaa1a55b275af14e6715fe904bb2cf1503c8060c9b44e5c03a: Status 404 returned 
error can't find the container with id 10333e218a7f7edaa1a55b275af14e6715fe904bb2cf1503c8060c9b44e5c03a Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.406622 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" event={"ID":"e93eae86-9b71-4a99-a981-e1dd6c2a028d","Type":"ContainerDied","Data":"ff3a6d5c978e9e20967a800d360423974f62a1ae85be6fa70a66b0bb4d6e8ab0"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.406633 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574fdb7f99-8qp92" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.408529 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4","Type":"ContainerStarted","Data":"8763edb89b4adc0514abab6fc76bd91875293f5bf80c8e6c3208c31556afe468"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.410922 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d5c55070-9969-4f85-be20-230c847f2722","Type":"ContainerStarted","Data":"a82ac177195d530d98bc7c3ac50dc3bf416df61fcfa58a4cbdae4818e8c835f8"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.412522 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7jbdh" event={"ID":"42758d74-6ff7-4d38-960f-41e387147352","Type":"ContainerStarted","Data":"cd8909bf7e5a86c0e7d23a8a68a89dfa48a0a687e29271da55ba62307acd7693"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.414032 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9e731197-939b-4501-8baf-54690037a9ca","Type":"ContainerStarted","Data":"3c71e4d8c7e1465311792d3d19d898bc54141e72c7e5902e1dd5afedf607b594"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.415536 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"0de2ea3c-89f5-4df2-841e-904cebc9fb7d","Type":"ContainerStarted","Data":"a9159828d41c9acae0969103f0ffe850cbb52c599f38da0dec56e123b9819ef5"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.416611 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"48869d89-959c-4f88-b8cb-1eb83e7fce30","Type":"ContainerStarted","Data":"36a0b55dac1e28dec07db60df605f6e3dc266a9fcc1e8b64ea27c146971e7ec0"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.417831 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8656c9d3-1d71-4e5d-a00d-2031808281d3","Type":"ContainerStarted","Data":"e29cd7fcb493a272859ad1fd9362f53aa46aab20d0f53dcadda482f33782313b"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.419168 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e34afaa-d033-4975-9f1f-e81ab6b743a9","Type":"ContainerStarted","Data":"1fc1052cf9179ee58f8c09083bb2c5125974403b297a98eb6776390acb1936d5"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.421056 5007 generic.go:334] "Generic (PLEG): container finished" podID="a2a9bc90-43bb-45f9-8c66-f39508e55213" containerID="277b33201858ee499b9dc768d70b6d58300120b1f523cdc706276603df064f7f" exitCode=0 Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.421108 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn" event={"ID":"a2a9bc90-43bb-45f9-8c66-f39508e55213","Type":"ContainerDied","Data":"277b33201858ee499b9dc768d70b6d58300120b1f523cdc706276603df064f7f"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.421127 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn" event={"ID":"a2a9bc90-43bb-45f9-8c66-f39508e55213","Type":"ContainerStarted","Data":"6219da3e3ab136fce13a2f91c2c35d61b63fb91b21dd12e1c08eb5e1db5da1d7"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 
19:54:49.423219 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" event={"ID":"1b1ffc7a-47d3-4119-a38b-618b01856cab","Type":"ContainerDied","Data":"b0387cd370ab025c6e67adf8938d4f806fbc12dccfbeb6dfc2428b935240947d"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.423275 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78ff9dfd65-dwh9w" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.424707 5007 generic.go:334] "Generic (PLEG): container finished" podID="debb8181-4165-44fd-83ed-0698c664676a" containerID="bb1181fe8309f12924ae3409c60039f4cfa16b16402666d004b4453ec7324386" exitCode=0 Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.424747 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766f65774c-c47q7" event={"ID":"debb8181-4165-44fd-83ed-0698c664676a","Type":"ContainerDied","Data":"bb1181fe8309f12924ae3409c60039f4cfa16b16402666d004b4453ec7324386"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.424776 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766f65774c-c47q7" event={"ID":"debb8181-4165-44fd-83ed-0698c664676a","Type":"ContainerStarted","Data":"585ffc1310d040cf82ecea60329f0e007937231a724f313a407a772b91fed01c"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.426169 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6tjjw" event={"ID":"bb28594a-2fde-4b1e-8caa-0590add450bb","Type":"ContainerStarted","Data":"10333e218a7f7edaa1a55b275af14e6715fe904bb2cf1503c8060c9b44e5c03a"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.427121 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"53311c89-f482-4136-beab-69e0c69ed6e3","Type":"ContainerStarted","Data":"7cc0f180d5d79f15bf7829e43d0f8bf7205075a8d8fc57c553ef169a8c022210"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 
19:54:49.428943 5007 generic.go:334] "Generic (PLEG): container finished" podID="8ec8b75e-b2fc-4d54-b528-4539856358b7" containerID="c54d910c69c13fcfbc6eb26267513980b41ffe0ce0f947bd0cca679fd30562b5" exitCode=0 Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.429210 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" event={"ID":"8ec8b75e-b2fc-4d54-b528-4539856358b7","Type":"ContainerDied","Data":"c54d910c69c13fcfbc6eb26267513980b41ffe0ce0f947bd0cca679fd30562b5"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.429259 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" event={"ID":"8ec8b75e-b2fc-4d54-b528-4539856358b7","Type":"ContainerStarted","Data":"f95e9a2ea8ef6af266c3b5233116f2246f7519598d20db734fcd5c4310eecb20"} Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.540641 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-8qp92"] Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.559040 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-8qp92"] Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.576364 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-dwh9w"] Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.582691 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-dwh9w"] Feb 18 19:54:49 crc kubenswrapper[5007]: E0218 19:54:49.664783 5007 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode93eae86_9b71_4a99_a981_e1dd6c2a028d.slice/crio-ff3a6d5c978e9e20967a800d360423974f62a1ae85be6fa70a66b0bb4d6e8ab0\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode93eae86_9b71_4a99_a981_e1dd6c2a028d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b1ffc7a_47d3_4119_a38b_618b01856cab.slice\": RecentStats: unable to find data in memory cache]" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.919516 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b1ffc7a-47d3-4119-a38b-618b01856cab" path="/var/lib/kubelet/pods/1b1ffc7a-47d3-4119-a38b-618b01856cab/volumes" Feb 18 19:54:49 crc kubenswrapper[5007]: I0218 19:54:49.920072 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e93eae86-9b71-4a99-a981-e1dd6c2a028d" path="/var/lib/kubelet/pods/e93eae86-9b71-4a99-a981-e1dd6c2a028d/volumes" Feb 18 19:54:52 crc kubenswrapper[5007]: I0218 19:54:52.877781 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" Feb 18 19:54:52 crc kubenswrapper[5007]: I0218 19:54:52.905569 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ec8b75e-b2fc-4d54-b528-4539856358b7-dns-svc\") pod \"8ec8b75e-b2fc-4d54-b528-4539856358b7\" (UID: \"8ec8b75e-b2fc-4d54-b528-4539856358b7\") " Feb 18 19:54:52 crc kubenswrapper[5007]: I0218 19:54:52.905617 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ec8b75e-b2fc-4d54-b528-4539856358b7-config\") pod \"8ec8b75e-b2fc-4d54-b528-4539856358b7\" (UID: \"8ec8b75e-b2fc-4d54-b528-4539856358b7\") " Feb 18 19:54:52 crc kubenswrapper[5007]: I0218 19:54:52.905677 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gsl2\" (UniqueName: \"kubernetes.io/projected/8ec8b75e-b2fc-4d54-b528-4539856358b7-kube-api-access-7gsl2\") pod 
\"8ec8b75e-b2fc-4d54-b528-4539856358b7\" (UID: \"8ec8b75e-b2fc-4d54-b528-4539856358b7\") " Feb 18 19:54:52 crc kubenswrapper[5007]: I0218 19:54:52.926707 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec8b75e-b2fc-4d54-b528-4539856358b7-kube-api-access-7gsl2" (OuterVolumeSpecName: "kube-api-access-7gsl2") pod "8ec8b75e-b2fc-4d54-b528-4539856358b7" (UID: "8ec8b75e-b2fc-4d54-b528-4539856358b7"). InnerVolumeSpecName "kube-api-access-7gsl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[5007]: I0218 19:54:52.943094 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ec8b75e-b2fc-4d54-b528-4539856358b7-config" (OuterVolumeSpecName: "config") pod "8ec8b75e-b2fc-4d54-b528-4539856358b7" (UID: "8ec8b75e-b2fc-4d54-b528-4539856358b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[5007]: I0218 19:54:52.944752 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ec8b75e-b2fc-4d54-b528-4539856358b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ec8b75e-b2fc-4d54-b528-4539856358b7" (UID: "8ec8b75e-b2fc-4d54-b528-4539856358b7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[5007]: I0218 19:54:53.009460 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ec8b75e-b2fc-4d54-b528-4539856358b7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[5007]: I0218 19:54:53.009489 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ec8b75e-b2fc-4d54-b528-4539856358b7-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[5007]: I0218 19:54:53.009504 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gsl2\" (UniqueName: \"kubernetes.io/projected/8ec8b75e-b2fc-4d54-b528-4539856358b7-kube-api-access-7gsl2\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[5007]: I0218 19:54:53.464795 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" event={"ID":"8ec8b75e-b2fc-4d54-b528-4539856358b7","Type":"ContainerDied","Data":"f95e9a2ea8ef6af266c3b5233116f2246f7519598d20db734fcd5c4310eecb20"} Feb 18 19:54:53 crc kubenswrapper[5007]: I0218 19:54:53.464849 5007 scope.go:117] "RemoveContainer" containerID="c54d910c69c13fcfbc6eb26267513980b41ffe0ce0f947bd0cca679fd30562b5" Feb 18 19:54:53 crc kubenswrapper[5007]: I0218 19:54:53.464969 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d96cb4cc5-hgz8p" Feb 18 19:54:53 crc kubenswrapper[5007]: I0218 19:54:53.640609 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d96cb4cc5-hgz8p"] Feb 18 19:54:53 crc kubenswrapper[5007]: I0218 19:54:53.662948 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d96cb4cc5-hgz8p"] Feb 18 19:54:53 crc kubenswrapper[5007]: I0218 19:54:53.930733 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ec8b75e-b2fc-4d54-b528-4539856358b7" path="/var/lib/kubelet/pods/8ec8b75e-b2fc-4d54-b528-4539856358b7/volumes" Feb 18 19:54:57 crc kubenswrapper[5007]: I0218 19:54:57.505231 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766f65774c-c47q7" event={"ID":"debb8181-4165-44fd-83ed-0698c664676a","Type":"ContainerStarted","Data":"74c2b22c662150ada6d8d0e3893554cce11e3aa395f5dbd315c3aa22b37a8e62"} Feb 18 19:54:57 crc kubenswrapper[5007]: I0218 19:54:57.505786 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-766f65774c-c47q7" Feb 18 19:54:57 crc kubenswrapper[5007]: I0218 19:54:57.525987 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-766f65774c-c47q7" podStartSLOduration=24.525968422 podStartE2EDuration="24.525968422s" podCreationTimestamp="2026-02-18 19:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:57.522988437 +0000 UTC m=+982.305107981" watchObservedRunningTime="2026-02-18 19:54:57.525968422 +0000 UTC m=+982.308087946" Feb 18 19:54:58 crc kubenswrapper[5007]: I0218 19:54:58.519942 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"53311c89-f482-4136-beab-69e0c69ed6e3","Type":"ContainerStarted","Data":"1db1dd962d29ca839989cefbd22938631b63a1dd2ed26e781c5727f3d9fcd7cd"} Feb 18 19:54:58 crc kubenswrapper[5007]: I0218 19:54:58.520388 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 19:54:58 crc kubenswrapper[5007]: I0218 19:54:58.561158 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.706761616 podStartE2EDuration="21.56112393s" podCreationTimestamp="2026-02-18 19:54:37 +0000 UTC" firstStartedPulling="2026-02-18 19:54:48.434662484 +0000 UTC m=+973.216782008" lastFinishedPulling="2026-02-18 19:54:56.289024798 +0000 UTC m=+981.071144322" observedRunningTime="2026-02-18 19:54:58.554838801 +0000 UTC m=+983.336958365" watchObservedRunningTime="2026-02-18 19:54:58.56112393 +0000 UTC m=+983.343243494" Feb 18 19:54:59 crc kubenswrapper[5007]: I0218 19:54:59.528982 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4","Type":"ContainerStarted","Data":"e109b1418ef2ed65126d6bb72edca8f3c651d9fc301d266c4542ca38fa2c217b"} Feb 18 19:54:59 crc kubenswrapper[5007]: I0218 19:54:59.533015 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8656c9d3-1d71-4e5d-a00d-2031808281d3","Type":"ContainerStarted","Data":"c515cdd77d85bc4c8b2a72ff9200e38f77fea7dfd686805fb3bd7ede95b5669b"} Feb 18 19:54:59 crc kubenswrapper[5007]: I0218 19:54:59.536015 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0de2ea3c-89f5-4df2-841e-904cebc9fb7d","Type":"ContainerStarted","Data":"08f86b173dcc26ea5040a38b8d89e2c88e6606fae2204ca9057e6830ba31fe94"} Feb 18 19:55:00 crc kubenswrapper[5007]: I0218 19:55:00.545618 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn" 
event={"ID":"a2a9bc90-43bb-45f9-8c66-f39508e55213","Type":"ContainerStarted","Data":"14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644"} Feb 18 19:55:00 crc kubenswrapper[5007]: I0218 19:55:00.545996 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn" Feb 18 19:55:00 crc kubenswrapper[5007]: I0218 19:55:00.548223 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9e731197-939b-4501-8baf-54690037a9ca","Type":"ContainerStarted","Data":"4aeac6dee38be1af8a76b7cf61c11e515cf29268bc0d7a311714eb2b24e973d8"} Feb 18 19:55:00 crc kubenswrapper[5007]: I0218 19:55:00.548449 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 19:55:00 crc kubenswrapper[5007]: I0218 19:55:00.551902 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"48869d89-959c-4f88-b8cb-1eb83e7fce30","Type":"ContainerStarted","Data":"f191b0d60a7f09d46f1ce76499a3a0de5097bdafd084bf1c8da964160512e247"} Feb 18 19:55:00 crc kubenswrapper[5007]: I0218 19:55:00.554029 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"43aa0db4-e558-4914-9ef5-249b2c6f4328","Type":"ContainerStarted","Data":"97f5c128a07358ecbb17e4e7eb2ba9ecf6464fe1e35ec06f93de47adf3af1a90"} Feb 18 19:55:00 crc kubenswrapper[5007]: I0218 19:55:00.568577 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn" podStartSLOduration=27.457812738 podStartE2EDuration="27.568536494s" podCreationTimestamp="2026-02-18 19:54:33 +0000 UTC" firstStartedPulling="2026-02-18 19:54:48.461894683 +0000 UTC m=+973.244014207" lastFinishedPulling="2026-02-18 19:54:48.572618439 +0000 UTC m=+973.354737963" observedRunningTime="2026-02-18 19:55:00.565860328 +0000 UTC m=+985.347979862" 
watchObservedRunningTime="2026-02-18 19:55:00.568536494 +0000 UTC m=+985.350656018" Feb 18 19:55:00 crc kubenswrapper[5007]: I0218 19:55:00.632155 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.83657838 podStartE2EDuration="21.632136175s" podCreationTimestamp="2026-02-18 19:54:39 +0000 UTC" firstStartedPulling="2026-02-18 19:54:48.847245694 +0000 UTC m=+973.629365218" lastFinishedPulling="2026-02-18 19:54:57.642803489 +0000 UTC m=+982.424923013" observedRunningTime="2026-02-18 19:55:00.627562795 +0000 UTC m=+985.409682339" watchObservedRunningTime="2026-02-18 19:55:00.632136175 +0000 UTC m=+985.414255699" Feb 18 19:55:01 crc kubenswrapper[5007]: I0218 19:55:01.562631 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d5c55070-9969-4f85-be20-230c847f2722","Type":"ContainerStarted","Data":"29a3e7e4dce02b197c67122d6a2f330abb32ef62b89cafcf0832560824364f47"} Feb 18 19:55:01 crc kubenswrapper[5007]: I0218 19:55:01.565520 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7jbdh" event={"ID":"42758d74-6ff7-4d38-960f-41e387147352","Type":"ContainerStarted","Data":"bb8764f45900c67c3f55836cc683fd2edd764fd6383740f2eabce32d62d6e41a"} Feb 18 19:55:01 crc kubenswrapper[5007]: I0218 19:55:01.565753 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7jbdh" Feb 18 19:55:01 crc kubenswrapper[5007]: I0218 19:55:01.566754 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e34afaa-d033-4975-9f1f-e81ab6b743a9","Type":"ContainerStarted","Data":"d13bd835c7a75e0fd047866199f573f5ab6e93a40365b1ee95241dd3f9c41398"} Feb 18 19:55:01 crc kubenswrapper[5007]: I0218 19:55:01.568253 5007 generic.go:334] "Generic (PLEG): container finished" podID="bb28594a-2fde-4b1e-8caa-0590add450bb" 
containerID="f1b8f2c5c36c4160b08247725ec7b3dec339667d2de096bb5239c64c52dca732" exitCode=0 Feb 18 19:55:01 crc kubenswrapper[5007]: I0218 19:55:01.568298 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6tjjw" event={"ID":"bb28594a-2fde-4b1e-8caa-0590add450bb","Type":"ContainerDied","Data":"f1b8f2c5c36c4160b08247725ec7b3dec339667d2de096bb5239c64c52dca732"} Feb 18 19:55:01 crc kubenswrapper[5007]: I0218 19:55:01.570710 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"9c7639d7-1f64-49c0-981d-c6231836c6d4","Type":"ContainerStarted","Data":"3b9b19856216126457701dad5a9b80a4771ad63b0183a1dd02f7bcfc185807c2"} Feb 18 19:55:01 crc kubenswrapper[5007]: I0218 19:55:01.637317 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7jbdh" podStartSLOduration=11.618439357 podStartE2EDuration="19.637297699s" podCreationTimestamp="2026-02-18 19:54:42 +0000 UTC" firstStartedPulling="2026-02-18 19:54:48.865775717 +0000 UTC m=+973.647895241" lastFinishedPulling="2026-02-18 19:54:56.884634059 +0000 UTC m=+981.666753583" observedRunningTime="2026-02-18 19:55:01.634059827 +0000 UTC m=+986.416179351" watchObservedRunningTime="2026-02-18 19:55:01.637297699 +0000 UTC m=+986.419417223" Feb 18 19:55:02 crc kubenswrapper[5007]: I0218 19:55:02.583614 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8656c9d3-1d71-4e5d-a00d-2031808281d3","Type":"ContainerStarted","Data":"ffc8bc075836acc9f7c94267e58629eafdc77e5d23bc129984a64ba43012bea5"} Feb 18 19:55:02 crc kubenswrapper[5007]: I0218 19:55:02.586138 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0de2ea3c-89f5-4df2-841e-904cebc9fb7d","Type":"ContainerStarted","Data":"462e193f3804ccc7e40020b7a0759c57bfaf6724a63d132ded89a06ce34bd700"} Feb 18 19:55:02 crc kubenswrapper[5007]: I0218 
19:55:02.588785 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6tjjw" event={"ID":"bb28594a-2fde-4b1e-8caa-0590add450bb","Type":"ContainerStarted","Data":"1a69d3f69e60613e15d49f46c95dbc91d7012979b3e7554f542281560771fe77"} Feb 18 19:55:02 crc kubenswrapper[5007]: I0218 19:55:02.588837 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6tjjw" event={"ID":"bb28594a-2fde-4b1e-8caa-0590add450bb","Type":"ContainerStarted","Data":"2243f4e44e5a2782659426f9e3fb026195cdaea3a4fe27c7e78d6127ffb55157"} Feb 18 19:55:02 crc kubenswrapper[5007]: I0218 19:55:02.611317 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.899752322 podStartE2EDuration="17.611298876s" podCreationTimestamp="2026-02-18 19:54:45 +0000 UTC" firstStartedPulling="2026-02-18 19:54:48.986948458 +0000 UTC m=+973.769067972" lastFinishedPulling="2026-02-18 19:55:01.698494992 +0000 UTC m=+986.480614526" observedRunningTime="2026-02-18 19:55:02.605005987 +0000 UTC m=+987.387125561" watchObservedRunningTime="2026-02-18 19:55:02.611298876 +0000 UTC m=+987.393418390" Feb 18 19:55:02 crc kubenswrapper[5007]: I0218 19:55:02.644924 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6tjjw" podStartSLOduration=13.37642846 podStartE2EDuration="20.644902073s" podCreationTimestamp="2026-02-18 19:54:42 +0000 UTC" firstStartedPulling="2026-02-18 19:54:49.278586313 +0000 UTC m=+974.060705837" lastFinishedPulling="2026-02-18 19:54:56.547059926 +0000 UTC m=+981.329179450" observedRunningTime="2026-02-18 19:55:02.641594359 +0000 UTC m=+987.423713893" watchObservedRunningTime="2026-02-18 19:55:02.644902073 +0000 UTC m=+987.427021607" Feb 18 19:55:02 crc kubenswrapper[5007]: I0218 19:55:02.675411 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" 
podStartSLOduration=8.111214956 podStartE2EDuration="20.675386641s" podCreationTimestamp="2026-02-18 19:54:42 +0000 UTC" firstStartedPulling="2026-02-18 19:54:49.143307043 +0000 UTC m=+973.925426567" lastFinishedPulling="2026-02-18 19:55:01.707478728 +0000 UTC m=+986.489598252" observedRunningTime="2026-02-18 19:55:02.665921041 +0000 UTC m=+987.448040595" watchObservedRunningTime="2026-02-18 19:55:02.675386641 +0000 UTC m=+987.457506175" Feb 18 19:55:02 crc kubenswrapper[5007]: I0218 19:55:02.908774 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 18 19:55:03 crc kubenswrapper[5007]: I0218 19:55:03.222181 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:55:03 crc kubenswrapper[5007]: I0218 19:55:03.222221 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:55:03 crc kubenswrapper[5007]: I0218 19:55:03.477669 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-766f65774c-c47q7" Feb 18 19:55:04 crc kubenswrapper[5007]: I0218 19:55:04.104511 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.104250 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.130822 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.186290 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.209573 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 18 19:55:05 
crc kubenswrapper[5007]: I0218 19:55:05.616917 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.687702 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.693330 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.920982 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7587d7df99-2cxpn"] Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.921247 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn" podUID="a2a9bc90-43bb-45f9-8c66-f39508e55213" containerName="dnsmasq-dns" containerID="cri-o://14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644" gracePeriod=10 Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.924903 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.945248 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86bdb9bf87-9vrf2"] Feb 18 19:55:05 crc kubenswrapper[5007]: E0218 19:55:05.945658 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec8b75e-b2fc-4d54-b528-4539856358b7" containerName="init" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.945674 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec8b75e-b2fc-4d54-b528-4539856358b7" containerName="init" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.945831 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec8b75e-b2fc-4d54-b528-4539856358b7" containerName="init" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.946719 5007 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.948532 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.975880 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-ovsdbserver-nb\") pod \"dnsmasq-dns-86bdb9bf87-9vrf2\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.975950 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pr6s\" (UniqueName: \"kubernetes.io/projected/999d7473-68ed-4ef4-96ad-02b051cfc893-kube-api-access-6pr6s\") pod \"dnsmasq-dns-86bdb9bf87-9vrf2\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.975989 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-config\") pod \"dnsmasq-dns-86bdb9bf87-9vrf2\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.976015 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-dns-svc\") pod \"dnsmasq-dns-86bdb9bf87-9vrf2\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:05 crc kubenswrapper[5007]: I0218 19:55:05.992238 5007 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-86bdb9bf87-9vrf2"] Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.021586 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-zxr7w"] Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.022706 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.027054 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.063219 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zxr7w"] Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.077852 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pr6s\" (UniqueName: \"kubernetes.io/projected/999d7473-68ed-4ef4-96ad-02b051cfc893-kube-api-access-6pr6s\") pod \"dnsmasq-dns-86bdb9bf87-9vrf2\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.077917 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-config\") pod \"dnsmasq-dns-86bdb9bf87-9vrf2\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.077954 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-dns-svc\") pod \"dnsmasq-dns-86bdb9bf87-9vrf2\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.078568 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-ovsdbserver-nb\") pod \"dnsmasq-dns-86bdb9bf87-9vrf2\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.079378 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-config\") pod \"dnsmasq-dns-86bdb9bf87-9vrf2\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.079504 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-ovsdbserver-nb\") pod \"dnsmasq-dns-86bdb9bf87-9vrf2\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.080158 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-dns-svc\") pod \"dnsmasq-dns-86bdb9bf87-9vrf2\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.106143 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.112294 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.118114 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.118454 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.118516 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.118299 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-gvzvb" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.122633 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.132921 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bdb9bf87-9vrf2"] Feb 18 19:55:06 crc kubenswrapper[5007]: E0218 19:55:06.139388 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-6pr6s], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" podUID="999d7473-68ed-4ef4-96ad-02b051cfc893" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.141227 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pr6s\" (UniqueName: \"kubernetes.io/projected/999d7473-68ed-4ef4-96ad-02b051cfc893-kube-api-access-6pr6s\") pod \"dnsmasq-dns-86bdb9bf87-9vrf2\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.179632 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/8bbb4181-874e-4130-9f3d-da6874ae7d45-ovn-rundir\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.179697 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6fvm\" (UniqueName: \"kubernetes.io/projected/8bbb4181-874e-4130-9f3d-da6874ae7d45-kube-api-access-m6fvm\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.179749 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bbb4181-874e-4130-9f3d-da6874ae7d45-config\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.179805 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbb4181-874e-4130-9f3d-da6874ae7d45-combined-ca-bundle\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.179832 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8bbb4181-874e-4130-9f3d-da6874ae7d45-ovs-rundir\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.180083 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbb4181-874e-4130-9f3d-da6874ae7d45-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.183695 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-599f5467c5-7mv4v"] Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.187835 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.191700 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.199558 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-599f5467c5-7mv4v"] Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281019 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8154ea23-9385-4586-b248-cd5d84f9e343-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281067 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8154ea23-9385-4586-b248-cd5d84f9e343-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281100 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8bbb4181-874e-4130-9f3d-da6874ae7d45-ovn-rundir\") pod \"ovn-controller-metrics-zxr7w\" (UID: 
\"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281127 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lf65\" (UniqueName: \"kubernetes.io/projected/8154ea23-9385-4586-b248-cd5d84f9e343-kube-api-access-4lf65\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281146 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6fvm\" (UniqueName: \"kubernetes.io/projected/8bbb4181-874e-4130-9f3d-da6874ae7d45-kube-api-access-m6fvm\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281173 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bbb4181-874e-4130-9f3d-da6874ae7d45-config\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281188 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8154ea23-9385-4586-b248-cd5d84f9e343-config\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281203 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8154ea23-9385-4586-b248-cd5d84f9e343-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 
18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281234 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbb4181-874e-4130-9f3d-da6874ae7d45-combined-ca-bundle\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281250 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8bbb4181-874e-4130-9f3d-da6874ae7d45-ovs-rundir\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281282 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbb4181-874e-4130-9f3d-da6874ae7d45-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281304 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8154ea23-9385-4586-b248-cd5d84f9e343-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281328 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8154ea23-9385-4586-b248-cd5d84f9e343-scripts\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.281948 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8bbb4181-874e-4130-9f3d-da6874ae7d45-ovn-rundir\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.282246 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8bbb4181-874e-4130-9f3d-da6874ae7d45-ovs-rundir\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.283054 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bbb4181-874e-4130-9f3d-da6874ae7d45-config\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.285628 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbb4181-874e-4130-9f3d-da6874ae7d45-combined-ca-bundle\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.296066 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbb4181-874e-4130-9f3d-da6874ae7d45-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.300046 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6fvm\" (UniqueName: 
\"kubernetes.io/projected/8bbb4181-874e-4130-9f3d-da6874ae7d45-kube-api-access-m6fvm\") pod \"ovn-controller-metrics-zxr7w\" (UID: \"8bbb4181-874e-4130-9f3d-da6874ae7d45\") " pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.383022 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-ovsdbserver-sb\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.383078 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8154ea23-9385-4586-b248-cd5d84f9e343-scripts\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.383103 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-config\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.383131 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-dns-svc\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.383154 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8154ea23-9385-4586-b248-cd5d84f9e343-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.383177 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8154ea23-9385-4586-b248-cd5d84f9e343-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.383209 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lf65\" (UniqueName: \"kubernetes.io/projected/8154ea23-9385-4586-b248-cd5d84f9e343-kube-api-access-4lf65\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.383230 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-ovsdbserver-nb\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.383260 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8154ea23-9385-4586-b248-cd5d84f9e343-config\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.383278 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8154ea23-9385-4586-b248-cd5d84f9e343-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.383321 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjks\" (UniqueName: \"kubernetes.io/projected/ae408ee7-67b9-47a0-abe3-d213994382dc-kube-api-access-nhjks\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.383369 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8154ea23-9385-4586-b248-cd5d84f9e343-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.384025 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8154ea23-9385-4586-b248-cd5d84f9e343-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.384911 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8154ea23-9385-4586-b248-cd5d84f9e343-config\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.385296 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8154ea23-9385-4586-b248-cd5d84f9e343-scripts\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.389836 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8154ea23-9385-4586-b248-cd5d84f9e343-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.389952 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8154ea23-9385-4586-b248-cd5d84f9e343-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.400866 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lf65\" (UniqueName: \"kubernetes.io/projected/8154ea23-9385-4586-b248-cd5d84f9e343-kube-api-access-4lf65\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.401023 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zxr7w" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.419265 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8154ea23-9385-4586-b248-cd5d84f9e343-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8154ea23-9385-4586-b248-cd5d84f9e343\") " pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.448691 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.484533 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-ovsdbserver-sb\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.484602 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-config\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.484633 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-dns-svc\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.484697 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-ovsdbserver-nb\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.484757 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjks\" (UniqueName: \"kubernetes.io/projected/ae408ee7-67b9-47a0-abe3-d213994382dc-kube-api-access-nhjks\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" 
Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.486126 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-ovsdbserver-sb\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.486182 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-dns-svc\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.486858 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-ovsdbserver-nb\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.487158 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-config\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.495920 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.505492 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjks\" (UniqueName: \"kubernetes.io/projected/ae408ee7-67b9-47a0-abe3-d213994382dc-kube-api-access-nhjks\") pod \"dnsmasq-dns-599f5467c5-7mv4v\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.522259 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.586159 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a9bc90-43bb-45f9-8c66-f39508e55213-config\") pod \"a2a9bc90-43bb-45f9-8c66-f39508e55213\" (UID: \"a2a9bc90-43bb-45f9-8c66-f39508e55213\") " Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.586214 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w6rb\" (UniqueName: \"kubernetes.io/projected/a2a9bc90-43bb-45f9-8c66-f39508e55213-kube-api-access-7w6rb\") pod \"a2a9bc90-43bb-45f9-8c66-f39508e55213\" (UID: \"a2a9bc90-43bb-45f9-8c66-f39508e55213\") " Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.586392 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a9bc90-43bb-45f9-8c66-f39508e55213-dns-svc\") pod \"a2a9bc90-43bb-45f9-8c66-f39508e55213\" (UID: \"a2a9bc90-43bb-45f9-8c66-f39508e55213\") " Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.596711 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a9bc90-43bb-45f9-8c66-f39508e55213-kube-api-access-7w6rb" (OuterVolumeSpecName: "kube-api-access-7w6rb") pod "a2a9bc90-43bb-45f9-8c66-f39508e55213" (UID: 
"a2a9bc90-43bb-45f9-8c66-f39508e55213"). InnerVolumeSpecName "kube-api-access-7w6rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.654857 5007 generic.go:334] "Generic (PLEG): container finished" podID="a2a9bc90-43bb-45f9-8c66-f39508e55213" containerID="14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644" exitCode=0 Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.658552 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn" event={"ID":"a2a9bc90-43bb-45f9-8c66-f39508e55213","Type":"ContainerDied","Data":"14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644"} Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.658622 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn" event={"ID":"a2a9bc90-43bb-45f9-8c66-f39508e55213","Type":"ContainerDied","Data":"6219da3e3ab136fce13a2f91c2c35d61b63fb91b21dd12e1c08eb5e1db5da1d7"} Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.658657 5007 scope.go:117] "RemoveContainer" containerID="14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.658723 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7587d7df99-2cxpn" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.658934 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.663759 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a9bc90-43bb-45f9-8c66-f39508e55213-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2a9bc90-43bb-45f9-8c66-f39508e55213" (UID: "a2a9bc90-43bb-45f9-8c66-f39508e55213"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.664308 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a9bc90-43bb-45f9-8c66-f39508e55213-config" (OuterVolumeSpecName: "config") pod "a2a9bc90-43bb-45f9-8c66-f39508e55213" (UID: "a2a9bc90-43bb-45f9-8c66-f39508e55213"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.671188 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.689922 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a9bc90-43bb-45f9-8c66-f39508e55213-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.689946 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a9bc90-43bb-45f9-8c66-f39508e55213-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.689957 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w6rb\" (UniqueName: \"kubernetes.io/projected/a2a9bc90-43bb-45f9-8c66-f39508e55213-kube-api-access-7w6rb\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.693539 5007 scope.go:117] "RemoveContainer" containerID="277b33201858ee499b9dc768d70b6d58300120b1f523cdc706276603df064f7f" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.722593 5007 scope.go:117] "RemoveContainer" containerID="14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644" Feb 18 19:55:06 crc kubenswrapper[5007]: E0218 19:55:06.723011 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644\": container with ID starting with 14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644 not found: ID does not exist" containerID="14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.723051 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644"} err="failed to get container status \"14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644\": rpc error: code = NotFound desc = could not find container \"14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644\": container with ID starting with 14244ba0d720c7e6a492dbd74f8b7c1b065530cbe93c59821f13a30474d4a644 not found: ID does not exist" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.723074 5007 scope.go:117] "RemoveContainer" containerID="277b33201858ee499b9dc768d70b6d58300120b1f523cdc706276603df064f7f" Feb 18 19:55:06 crc kubenswrapper[5007]: E0218 19:55:06.723375 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"277b33201858ee499b9dc768d70b6d58300120b1f523cdc706276603df064f7f\": container with ID starting with 277b33201858ee499b9dc768d70b6d58300120b1f523cdc706276603df064f7f not found: ID does not exist" containerID="277b33201858ee499b9dc768d70b6d58300120b1f523cdc706276603df064f7f" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.723392 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"277b33201858ee499b9dc768d70b6d58300120b1f523cdc706276603df064f7f"} err="failed to get container status \"277b33201858ee499b9dc768d70b6d58300120b1f523cdc706276603df064f7f\": rpc error: code = NotFound desc = could not find container \"277b33201858ee499b9dc768d70b6d58300120b1f523cdc706276603df064f7f\": container with ID 
starting with 277b33201858ee499b9dc768d70b6d58300120b1f523cdc706276603df064f7f not found: ID does not exist" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.791230 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-ovsdbserver-nb\") pod \"999d7473-68ed-4ef4-96ad-02b051cfc893\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.791324 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pr6s\" (UniqueName: \"kubernetes.io/projected/999d7473-68ed-4ef4-96ad-02b051cfc893-kube-api-access-6pr6s\") pod \"999d7473-68ed-4ef4-96ad-02b051cfc893\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.791398 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-config\") pod \"999d7473-68ed-4ef4-96ad-02b051cfc893\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.791484 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-dns-svc\") pod \"999d7473-68ed-4ef4-96ad-02b051cfc893\" (UID: \"999d7473-68ed-4ef4-96ad-02b051cfc893\") " Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.791737 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "999d7473-68ed-4ef4-96ad-02b051cfc893" (UID: "999d7473-68ed-4ef4-96ad-02b051cfc893"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.791836 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-config" (OuterVolumeSpecName: "config") pod "999d7473-68ed-4ef4-96ad-02b051cfc893" (UID: "999d7473-68ed-4ef4-96ad-02b051cfc893"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.791918 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.791929 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.792526 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "999d7473-68ed-4ef4-96ad-02b051cfc893" (UID: "999d7473-68ed-4ef4-96ad-02b051cfc893"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.795007 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/999d7473-68ed-4ef4-96ad-02b051cfc893-kube-api-access-6pr6s" (OuterVolumeSpecName: "kube-api-access-6pr6s") pod "999d7473-68ed-4ef4-96ad-02b051cfc893" (UID: "999d7473-68ed-4ef4-96ad-02b051cfc893"). InnerVolumeSpecName "kube-api-access-6pr6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.893745 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/999d7473-68ed-4ef4-96ad-02b051cfc893-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.893795 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pr6s\" (UniqueName: \"kubernetes.io/projected/999d7473-68ed-4ef4-96ad-02b051cfc893-kube-api-access-6pr6s\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:06 crc kubenswrapper[5007]: I0218 19:55:06.933793 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zxr7w"] Feb 18 19:55:06 crc kubenswrapper[5007]: W0218 19:55:06.937261 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bbb4181_874e_4130_9f3d_da6874ae7d45.slice/crio-5f2f8b7e45921a998547e38110e54ff7eeaa8352d62268bd5be740c5fd458406 WatchSource:0}: Error finding container 5f2f8b7e45921a998547e38110e54ff7eeaa8352d62268bd5be740c5fd458406: Status 404 returned error can't find the container with id 5f2f8b7e45921a998547e38110e54ff7eeaa8352d62268bd5be740c5fd458406 Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.001684 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7587d7df99-2cxpn"] Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.011603 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7587d7df99-2cxpn"] Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.028198 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:55:07 crc kubenswrapper[5007]: W0218 19:55:07.028484 5007 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8154ea23_9385_4586_b248_cd5d84f9e343.slice/crio-135a1d8141747b0706b5dcacbd7211ac44b84e4ed66c0bdbdaf3ac6d7c0f95e6 WatchSource:0}: Error finding container 135a1d8141747b0706b5dcacbd7211ac44b84e4ed66c0bdbdaf3ac6d7c0f95e6: Status 404 returned error can't find the container with id 135a1d8141747b0706b5dcacbd7211ac44b84e4ed66c0bdbdaf3ac6d7c0f95e6 Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.082791 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-599f5467c5-7mv4v"] Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.666781 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zxr7w" event={"ID":"8bbb4181-874e-4130-9f3d-da6874ae7d45","Type":"ContainerStarted","Data":"02462754ed90e160dbb581d98dd75624d650d6348c0c898d6c9d6b26d3c7267a"} Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.667119 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zxr7w" event={"ID":"8bbb4181-874e-4130-9f3d-da6874ae7d45","Type":"ContainerStarted","Data":"5f2f8b7e45921a998547e38110e54ff7eeaa8352d62268bd5be740c5fd458406"} Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.670845 5007 generic.go:334] "Generic (PLEG): container finished" podID="d0e8a83e-b230-482d-bcf9-d2d8f5d474e4" containerID="e109b1418ef2ed65126d6bb72edca8f3c651d9fc301d266c4542ca38fa2c217b" exitCode=0 Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.670969 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4","Type":"ContainerDied","Data":"e109b1418ef2ed65126d6bb72edca8f3c651d9fc301d266c4542ca38fa2c217b"} Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.672290 5007 generic.go:334] "Generic (PLEG): container finished" podID="ae408ee7-67b9-47a0-abe3-d213994382dc" 
containerID="870c8bad4ea89eedcd1df466d38e5cbf60844da05232fa3511cb81cb86521181" exitCode=0 Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.672339 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" event={"ID":"ae408ee7-67b9-47a0-abe3-d213994382dc","Type":"ContainerDied","Data":"870c8bad4ea89eedcd1df466d38e5cbf60844da05232fa3511cb81cb86521181"} Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.672354 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" event={"ID":"ae408ee7-67b9-47a0-abe3-d213994382dc","Type":"ContainerStarted","Data":"74fec85481d2de805299f2d3e3ce69c28bd8e76f2c6dcde3fd97117c4c5be449"} Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.674590 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bdb9bf87-9vrf2" Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.678822 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8154ea23-9385-4586-b248-cd5d84f9e343","Type":"ContainerStarted","Data":"135a1d8141747b0706b5dcacbd7211ac44b84e4ed66c0bdbdaf3ac6d7c0f95e6"} Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.687398 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-zxr7w" podStartSLOduration=2.687382645 podStartE2EDuration="2.687382645s" podCreationTimestamp="2026-02-18 19:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:07.685866501 +0000 UTC m=+992.467986025" watchObservedRunningTime="2026-02-18 19:55:07.687382645 +0000 UTC m=+992.469502169" Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.788711 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bdb9bf87-9vrf2"] Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.794483 5007 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86bdb9bf87-9vrf2"] Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.921534 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="999d7473-68ed-4ef4-96ad-02b051cfc893" path="/var/lib/kubelet/pods/999d7473-68ed-4ef4-96ad-02b051cfc893/volumes" Feb 18 19:55:07 crc kubenswrapper[5007]: I0218 19:55:07.922096 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2a9bc90-43bb-45f9-8c66-f39508e55213" path="/var/lib/kubelet/pods/a2a9bc90-43bb-45f9-8c66-f39508e55213/volumes" Feb 18 19:55:08 crc kubenswrapper[5007]: I0218 19:55:08.683640 5007 generic.go:334] "Generic (PLEG): container finished" podID="43aa0db4-e558-4914-9ef5-249b2c6f4328" containerID="97f5c128a07358ecbb17e4e7eb2ba9ecf6464fe1e35ec06f93de47adf3af1a90" exitCode=0 Feb 18 19:55:08 crc kubenswrapper[5007]: I0218 19:55:08.683741 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"43aa0db4-e558-4914-9ef5-249b2c6f4328","Type":"ContainerDied","Data":"97f5c128a07358ecbb17e4e7eb2ba9ecf6464fe1e35ec06f93de47adf3af1a90"} Feb 18 19:55:08 crc kubenswrapper[5007]: I0218 19:55:08.686826 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d0e8a83e-b230-482d-bcf9-d2d8f5d474e4","Type":"ContainerStarted","Data":"7ac4c2bec3467f17791830c41d0b69ac927324999abce80ca5adae7009a75184"} Feb 18 19:55:08 crc kubenswrapper[5007]: I0218 19:55:08.689894 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" event={"ID":"ae408ee7-67b9-47a0-abe3-d213994382dc","Type":"ContainerStarted","Data":"3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe"} Feb 18 19:55:08 crc kubenswrapper[5007]: I0218 19:55:08.691139 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:08 crc kubenswrapper[5007]: 
I0218 19:55:08.692880 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8154ea23-9385-4586-b248-cd5d84f9e343","Type":"ContainerStarted","Data":"3be56f7d500b3e329b1c02358c730b86985f5f68c279b7223609856092b31636"} Feb 18 19:55:08 crc kubenswrapper[5007]: I0218 19:55:08.692921 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8154ea23-9385-4586-b248-cd5d84f9e343","Type":"ContainerStarted","Data":"d6e05b4246334b3ae903e1b2b051bb8d0e91a0bb9afa792368cbfee3ef81a516"} Feb 18 19:55:08 crc kubenswrapper[5007]: I0218 19:55:08.693646 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 18 19:55:08 crc kubenswrapper[5007]: I0218 19:55:08.702262 5007 generic.go:334] "Generic (PLEG): container finished" podID="d5c55070-9969-4f85-be20-230c847f2722" containerID="29a3e7e4dce02b197c67122d6a2f330abb32ef62b89cafcf0832560824364f47" exitCode=0 Feb 18 19:55:08 crc kubenswrapper[5007]: I0218 19:55:08.703065 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d5c55070-9969-4f85-be20-230c847f2722","Type":"ContainerDied","Data":"29a3e7e4dce02b197c67122d6a2f330abb32ef62b89cafcf0832560824364f47"} Feb 18 19:55:08 crc kubenswrapper[5007]: I0218 19:55:08.735394 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.082040963 podStartE2EDuration="2.735366918s" podCreationTimestamp="2026-02-18 19:55:06 +0000 UTC" firstStartedPulling="2026-02-18 19:55:07.032619389 +0000 UTC m=+991.814738923" lastFinishedPulling="2026-02-18 19:55:07.685945354 +0000 UTC m=+992.468064878" observedRunningTime="2026-02-18 19:55:08.734619127 +0000 UTC m=+993.516738681" watchObservedRunningTime="2026-02-18 19:55:08.735366918 +0000 UTC m=+993.517486482" Feb 18 19:55:08 crc kubenswrapper[5007]: I0218 19:55:08.760675 5007 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" podStartSLOduration=2.760654598 podStartE2EDuration="2.760654598s" podCreationTimestamp="2026-02-18 19:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:08.758287301 +0000 UTC m=+993.540406845" watchObservedRunningTime="2026-02-18 19:55:08.760654598 +0000 UTC m=+993.542774132" Feb 18 19:55:08 crc kubenswrapper[5007]: I0218 19:55:08.789597 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.002653466 podStartE2EDuration="34.789577582s" podCreationTimestamp="2026-02-18 19:54:34 +0000 UTC" firstStartedPulling="2026-02-18 19:54:48.81948309 +0000 UTC m=+973.601602614" lastFinishedPulling="2026-02-18 19:54:56.606407206 +0000 UTC m=+981.388526730" observedRunningTime="2026-02-18 19:55:08.781354237 +0000 UTC m=+993.563473771" watchObservedRunningTime="2026-02-18 19:55:08.789577582 +0000 UTC m=+993.571697116" Feb 18 19:55:09 crc kubenswrapper[5007]: I0218 19:55:09.715933 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"43aa0db4-e558-4914-9ef5-249b2c6f4328","Type":"ContainerStarted","Data":"87508031f0c279f53e6de9c95a3558a37282601e83c7f4dba5ee0bfc2d43747f"} Feb 18 19:55:09 crc kubenswrapper[5007]: I0218 19:55:09.755966 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.288785461 podStartE2EDuration="33.755943111s" podCreationTimestamp="2026-02-18 19:54:36 +0000 UTC" firstStartedPulling="2026-02-18 19:54:48.247690414 +0000 UTC m=+973.029809938" lastFinishedPulling="2026-02-18 19:54:56.714848054 +0000 UTC m=+981.496967588" observedRunningTime="2026-02-18 19:55:09.744775453 +0000 UTC m=+994.526895007" watchObservedRunningTime="2026-02-18 19:55:09.755943111 +0000 UTC 
m=+994.538062665" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.101804 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-599f5467c5-7mv4v"] Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.139532 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c566df67-pzcjs"] Feb 18 19:55:10 crc kubenswrapper[5007]: E0218 19:55:10.139983 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a9bc90-43bb-45f9-8c66-f39508e55213" containerName="init" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.140001 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a9bc90-43bb-45f9-8c66-f39508e55213" containerName="init" Feb 18 19:55:10 crc kubenswrapper[5007]: E0218 19:55:10.140016 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a9bc90-43bb-45f9-8c66-f39508e55213" containerName="dnsmasq-dns" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.140022 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a9bc90-43bb-45f9-8c66-f39508e55213" containerName="dnsmasq-dns" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.140202 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a9bc90-43bb-45f9-8c66-f39508e55213" containerName="dnsmasq-dns" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.141307 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.155386 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c566df67-pzcjs"] Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.190867 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.255817 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-ovsdbserver-sb\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.255925 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-dns-svc\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.255949 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-ovsdbserver-nb\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.255969 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-config\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " 
pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.255999 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm4zx\" (UniqueName: \"kubernetes.io/projected/8659fece-e93f-4032-9050-659bc594f86b-kube-api-access-jm4zx\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.357725 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-dns-svc\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.357786 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-ovsdbserver-nb\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.357845 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-config\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.357885 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm4zx\" (UniqueName: \"kubernetes.io/projected/8659fece-e93f-4032-9050-659bc594f86b-kube-api-access-jm4zx\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " 
pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.358006 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-ovsdbserver-sb\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.359229 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-ovsdbserver-sb\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.360012 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-dns-svc\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.360517 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-config\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.361480 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-ovsdbserver-nb\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.397911 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm4zx\" (UniqueName: \"kubernetes.io/projected/8659fece-e93f-4032-9050-659bc594f86b-kube-api-access-jm4zx\") pod \"dnsmasq-dns-75c566df67-pzcjs\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.462965 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.734683 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" podUID="ae408ee7-67b9-47a0-abe3-d213994382dc" containerName="dnsmasq-dns" containerID="cri-o://3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe" gracePeriod=10 Feb 18 19:55:10 crc kubenswrapper[5007]: I0218 19:55:10.951574 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c566df67-pzcjs"] Feb 18 19:55:10 crc kubenswrapper[5007]: W0218 19:55:10.954739 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8659fece_e93f_4032_9050_659bc594f86b.slice/crio-f16d49881219ac9b71743cdbf235634c401322ce95c2b5a7ff4ccf13b9066b6f WatchSource:0}: Error finding container f16d49881219ac9b71743cdbf235634c401322ce95c2b5a7ff4ccf13b9066b6f: Status 404 returned error can't find the container with id f16d49881219ac9b71743cdbf235634c401322ce95c2b5a7ff4ccf13b9066b6f Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.147223 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.233316 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:55:11 crc kubenswrapper[5007]: E0218 19:55:11.233669 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae408ee7-67b9-47a0-abe3-d213994382dc" containerName="init" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.233686 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae408ee7-67b9-47a0-abe3-d213994382dc" containerName="init" Feb 18 19:55:11 crc kubenswrapper[5007]: E0218 19:55:11.233723 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae408ee7-67b9-47a0-abe3-d213994382dc" containerName="dnsmasq-dns" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.233730 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae408ee7-67b9-47a0-abe3-d213994382dc" containerName="dnsmasq-dns" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.233875 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae408ee7-67b9-47a0-abe3-d213994382dc" containerName="dnsmasq-dns" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.239627 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.242862 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-qpxgd" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.243140 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.243221 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.244969 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.268668 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.284361 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-ovsdbserver-nb\") pod \"ae408ee7-67b9-47a0-abe3-d213994382dc\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.284484 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-ovsdbserver-sb\") pod \"ae408ee7-67b9-47a0-abe3-d213994382dc\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.284623 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-dns-svc\") pod \"ae408ee7-67b9-47a0-abe3-d213994382dc\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.284717 5007 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhjks\" (UniqueName: \"kubernetes.io/projected/ae408ee7-67b9-47a0-abe3-d213994382dc-kube-api-access-nhjks\") pod \"ae408ee7-67b9-47a0-abe3-d213994382dc\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.284756 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-config\") pod \"ae408ee7-67b9-47a0-abe3-d213994382dc\" (UID: \"ae408ee7-67b9-47a0-abe3-d213994382dc\") " Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.308701 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae408ee7-67b9-47a0-abe3-d213994382dc-kube-api-access-nhjks" (OuterVolumeSpecName: "kube-api-access-nhjks") pod "ae408ee7-67b9-47a0-abe3-d213994382dc" (UID: "ae408ee7-67b9-47a0-abe3-d213994382dc"). InnerVolumeSpecName "kube-api-access-nhjks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.333317 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae408ee7-67b9-47a0-abe3-d213994382dc" (UID: "ae408ee7-67b9-47a0-abe3-d213994382dc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.333706 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae408ee7-67b9-47a0-abe3-d213994382dc" (UID: "ae408ee7-67b9-47a0-abe3-d213994382dc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.346011 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae408ee7-67b9-47a0-abe3-d213994382dc" (UID: "ae408ee7-67b9-47a0-abe3-d213994382dc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.356068 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-config" (OuterVolumeSpecName: "config") pod "ae408ee7-67b9-47a0-abe3-d213994382dc" (UID: "ae408ee7-67b9-47a0-abe3-d213994382dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.387522 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-lock\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.387570 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.387593 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " 
pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.387677 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qqd4\" (UniqueName: \"kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-kube-api-access-2qqd4\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.387727 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.387761 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-cache\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.387823 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.387833 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.387843 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:11 crc 
kubenswrapper[5007]: I0218 19:55:11.387853 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhjks\" (UniqueName: \"kubernetes.io/projected/ae408ee7-67b9-47a0-abe3-d213994382dc-kube-api-access-nhjks\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.387864 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae408ee7-67b9-47a0-abe3-d213994382dc-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.489610 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qqd4\" (UniqueName: \"kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-kube-api-access-2qqd4\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.489690 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.489722 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-cache\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.489769 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-lock\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 
19:55:11.489787 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.489813 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: E0218 19:55:11.490147 5007 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:55:11 crc kubenswrapper[5007]: E0218 19:55:11.490172 5007 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.490184 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: E0218 19:55:11.490218 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift podName:d7b23e47-cf61-4c51-81b1-f9f7357c12b8 nodeName:}" failed. No retries permitted until 2026-02-18 19:55:11.990204376 +0000 UTC m=+996.772323900 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift") pod "swift-storage-0" (UID: "d7b23e47-cf61-4c51-81b1-f9f7357c12b8") : configmap "swift-ring-files" not found Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.490294 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-cache\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.490346 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-lock\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.501486 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.517019 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qqd4\" (UniqueName: \"kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-kube-api-access-2qqd4\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.518224 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:11 
crc kubenswrapper[5007]: I0218 19:55:11.745694 5007 generic.go:334] "Generic (PLEG): container finished" podID="ae408ee7-67b9-47a0-abe3-d213994382dc" containerID="3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe" exitCode=0 Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.745766 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" event={"ID":"ae408ee7-67b9-47a0-abe3-d213994382dc","Type":"ContainerDied","Data":"3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe"} Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.745798 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" event={"ID":"ae408ee7-67b9-47a0-abe3-d213994382dc","Type":"ContainerDied","Data":"74fec85481d2de805299f2d3e3ce69c28bd8e76f2c6dcde3fd97117c4c5be449"} Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.745818 5007 scope.go:117] "RemoveContainer" containerID="3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.745956 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-599f5467c5-7mv4v" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.753563 5007 generic.go:334] "Generic (PLEG): container finished" podID="8659fece-e93f-4032-9050-659bc594f86b" containerID="a506f4f3a618ed9b05a0b0e06053ff58f5a916ba4d4f824c54678ec2561cb270" exitCode=0 Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.753612 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" event={"ID":"8659fece-e93f-4032-9050-659bc594f86b","Type":"ContainerDied","Data":"a506f4f3a618ed9b05a0b0e06053ff58f5a916ba4d4f824c54678ec2561cb270"} Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.753646 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" event={"ID":"8659fece-e93f-4032-9050-659bc594f86b","Type":"ContainerStarted","Data":"f16d49881219ac9b71743cdbf235634c401322ce95c2b5a7ff4ccf13b9066b6f"} Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.780014 5007 scope.go:117] "RemoveContainer" containerID="870c8bad4ea89eedcd1df466d38e5cbf60844da05232fa3511cb81cb86521181" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.811969 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ph9tp"] Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.813061 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.815795 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.816082 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.816323 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.821600 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ph9tp"] Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.859626 5007 scope.go:117] "RemoveContainer" containerID="3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe" Feb 18 19:55:11 crc kubenswrapper[5007]: E0218 19:55:11.860096 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe\": container with ID starting with 3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe not found: ID does not exist" containerID="3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.860137 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe"} err="failed to get container status \"3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe\": rpc error: code = NotFound desc = could not find container \"3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe\": container with ID starting with 3219fa9ea033a6c835a9266d995b5d34a6dfe8eb5aed1bc887a5558a8f248efe not found: ID does not exist" Feb 18 19:55:11 crc 
kubenswrapper[5007]: I0218 19:55:11.860166 5007 scope.go:117] "RemoveContainer" containerID="870c8bad4ea89eedcd1df466d38e5cbf60844da05232fa3511cb81cb86521181" Feb 18 19:55:11 crc kubenswrapper[5007]: E0218 19:55:11.860399 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"870c8bad4ea89eedcd1df466d38e5cbf60844da05232fa3511cb81cb86521181\": container with ID starting with 870c8bad4ea89eedcd1df466d38e5cbf60844da05232fa3511cb81cb86521181 not found: ID does not exist" containerID="870c8bad4ea89eedcd1df466d38e5cbf60844da05232fa3511cb81cb86521181" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.860425 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870c8bad4ea89eedcd1df466d38e5cbf60844da05232fa3511cb81cb86521181"} err="failed to get container status \"870c8bad4ea89eedcd1df466d38e5cbf60844da05232fa3511cb81cb86521181\": rpc error: code = NotFound desc = could not find container \"870c8bad4ea89eedcd1df466d38e5cbf60844da05232fa3511cb81cb86521181\": container with ID starting with 870c8bad4ea89eedcd1df466d38e5cbf60844da05232fa3511cb81cb86521181 not found: ID does not exist" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.862870 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-599f5467c5-7mv4v"] Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.870821 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-599f5467c5-7mv4v"] Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.895783 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-dispersionconf\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 
19:55:11.895822 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg5p8\" (UniqueName: \"kubernetes.io/projected/62901f6c-6a4b-4bdb-8913-167f40aa0e66-kube-api-access-qg5p8\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.895843 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/62901f6c-6a4b-4bdb-8913-167f40aa0e66-etc-swift\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.895897 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62901f6c-6a4b-4bdb-8913-167f40aa0e66-ring-data-devices\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.895933 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-swiftconf\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.895970 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62901f6c-6a4b-4bdb-8913-167f40aa0e66-scripts\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:11 crc kubenswrapper[5007]: 
I0218 19:55:11.895988 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-combined-ca-bundle\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:11 crc kubenswrapper[5007]: I0218 19:55:11.918691 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae408ee7-67b9-47a0-abe3-d213994382dc" path="/var/lib/kubelet/pods/ae408ee7-67b9-47a0-abe3-d213994382dc/volumes" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.011415 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-dispersionconf\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.011502 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg5p8\" (UniqueName: \"kubernetes.io/projected/62901f6c-6a4b-4bdb-8913-167f40aa0e66-kube-api-access-qg5p8\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.011554 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/62901f6c-6a4b-4bdb-8913-167f40aa0e66-etc-swift\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.012000 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/62901f6c-6a4b-4bdb-8913-167f40aa0e66-etc-swift\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.011656 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62901f6c-6a4b-4bdb-8913-167f40aa0e66-ring-data-devices\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.012304 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-swiftconf\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.012371 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62901f6c-6a4b-4bdb-8913-167f40aa0e66-scripts\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.012398 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-combined-ca-bundle\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.012419 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift\") pod \"swift-storage-0\" 
(UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.012556 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62901f6c-6a4b-4bdb-8913-167f40aa0e66-ring-data-devices\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: E0218 19:55:12.012637 5007 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:55:12 crc kubenswrapper[5007]: E0218 19:55:12.012654 5007 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:55:12 crc kubenswrapper[5007]: E0218 19:55:12.012691 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift podName:d7b23e47-cf61-4c51-81b1-f9f7357c12b8 nodeName:}" failed. No retries permitted until 2026-02-18 19:55:13.012678405 +0000 UTC m=+997.794797919 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift") pod "swift-storage-0" (UID: "d7b23e47-cf61-4c51-81b1-f9f7357c12b8") : configmap "swift-ring-files" not found Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.013024 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62901f6c-6a4b-4bdb-8913-167f40aa0e66-scripts\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.014577 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-dispersionconf\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.015384 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-swiftconf\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.016008 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-combined-ca-bundle\") pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.042009 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg5p8\" (UniqueName: \"kubernetes.io/projected/62901f6c-6a4b-4bdb-8913-167f40aa0e66-kube-api-access-qg5p8\") 
pod \"swift-ring-rebalance-ph9tp\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.161269 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.606925 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ph9tp"] Feb 18 19:55:12 crc kubenswrapper[5007]: W0218 19:55:12.609233 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62901f6c_6a4b_4bdb_8913_167f40aa0e66.slice/crio-edcca6b30042e7ce32b06ff79b44e9c50662077a0ab94e655c06663c9ec09761 WatchSource:0}: Error finding container edcca6b30042e7ce32b06ff79b44e9c50662077a0ab94e655c06663c9ec09761: Status 404 returned error can't find the container with id edcca6b30042e7ce32b06ff79b44e9c50662077a0ab94e655c06663c9ec09761 Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.775232 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ph9tp" event={"ID":"62901f6c-6a4b-4bdb-8913-167f40aa0e66","Type":"ContainerStarted","Data":"edcca6b30042e7ce32b06ff79b44e9c50662077a0ab94e655c06663c9ec09761"} Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.777633 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" event={"ID":"8659fece-e93f-4032-9050-659bc594f86b","Type":"ContainerStarted","Data":"6a2cdb9c89efabe8b4e4dc262bb9b9a7aa539bfa446ec435f788a22351f1a1d1"} Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.777853 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:12 crc kubenswrapper[5007]: I0218 19:55:12.809199 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" 
podStartSLOduration=2.809179567 podStartE2EDuration="2.809179567s" podCreationTimestamp="2026-02-18 19:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:12.800607153 +0000 UTC m=+997.582726687" watchObservedRunningTime="2026-02-18 19:55:12.809179567 +0000 UTC m=+997.591299091" Feb 18 19:55:12 crc kubenswrapper[5007]: E0218 19:55:12.933731 5007 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.132:47886->38.102.83.132:41997: read tcp 38.102.83.132:47886->38.102.83.132:41997: read: connection reset by peer Feb 18 19:55:13 crc kubenswrapper[5007]: I0218 19:55:13.033970 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:13 crc kubenswrapper[5007]: E0218 19:55:13.035620 5007 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:55:13 crc kubenswrapper[5007]: E0218 19:55:13.035645 5007 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:55:13 crc kubenswrapper[5007]: E0218 19:55:13.035689 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift podName:d7b23e47-cf61-4c51-81b1-f9f7357c12b8 nodeName:}" failed. No retries permitted until 2026-02-18 19:55:15.035671906 +0000 UTC m=+999.817791430 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift") pod "swift-storage-0" (UID: "d7b23e47-cf61-4c51-81b1-f9f7357c12b8") : configmap "swift-ring-files" not found Feb 18 19:55:15 crc kubenswrapper[5007]: I0218 19:55:15.079199 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:15 crc kubenswrapper[5007]: E0218 19:55:15.079336 5007 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:55:15 crc kubenswrapper[5007]: E0218 19:55:15.079620 5007 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:55:15 crc kubenswrapper[5007]: E0218 19:55:15.081319 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift podName:d7b23e47-cf61-4c51-81b1-f9f7357c12b8 nodeName:}" failed. No retries permitted until 2026-02-18 19:55:19.081293108 +0000 UTC m=+1003.863412732 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift") pod "swift-storage-0" (UID: "d7b23e47-cf61-4c51-81b1-f9f7357c12b8") : configmap "swift-ring-files" not found Feb 18 19:55:16 crc kubenswrapper[5007]: I0218 19:55:16.322756 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 19:55:16 crc kubenswrapper[5007]: I0218 19:55:16.323560 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 19:55:16 crc kubenswrapper[5007]: I0218 19:55:16.462561 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 19:55:16 crc kubenswrapper[5007]: I0218 19:55:16.929414 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 19:55:17 crc kubenswrapper[5007]: I0218 19:55:17.450111 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 19:55:17 crc kubenswrapper[5007]: I0218 19:55:17.450190 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 19:55:17 crc kubenswrapper[5007]: I0218 19:55:17.578672 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 19:55:18 crc kubenswrapper[5007]: I0218 19:55:18.003178 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.157033 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-v9r5p"] Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.158588 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-v9r5p" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.159808 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:19 crc kubenswrapper[5007]: E0218 19:55:19.160305 5007 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:55:19 crc kubenswrapper[5007]: E0218 19:55:19.160474 5007 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:55:19 crc kubenswrapper[5007]: E0218 19:55:19.160674 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift podName:d7b23e47-cf61-4c51-81b1-f9f7357c12b8 nodeName:}" failed. No retries permitted until 2026-02-18 19:55:27.160646415 +0000 UTC m=+1011.942765959 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift") pod "swift-storage-0" (UID: "d7b23e47-cf61-4c51-81b1-f9f7357c12b8") : configmap "swift-ring-files" not found Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.169559 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2827-account-create-update-bnz5f"] Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.171379 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2827-account-create-update-bnz5f" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.173071 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.178237 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-v9r5p"] Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.211270 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2827-account-create-update-bnz5f"] Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.261915 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mqjk\" (UniqueName: \"kubernetes.io/projected/08430e05-3044-4b57-bbda-98dbf36842eb-kube-api-access-7mqjk\") pod \"keystone-db-create-v9r5p\" (UID: \"08430e05-3044-4b57-bbda-98dbf36842eb\") " pod="openstack/keystone-db-create-v9r5p" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.262044 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fk7m\" (UniqueName: \"kubernetes.io/projected/d15bfff7-0338-47e8-bdd1-4ab3241829b0-kube-api-access-2fk7m\") pod \"keystone-2827-account-create-update-bnz5f\" (UID: \"d15bfff7-0338-47e8-bdd1-4ab3241829b0\") " pod="openstack/keystone-2827-account-create-update-bnz5f" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.262080 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08430e05-3044-4b57-bbda-98dbf36842eb-operator-scripts\") pod \"keystone-db-create-v9r5p\" (UID: \"08430e05-3044-4b57-bbda-98dbf36842eb\") " pod="openstack/keystone-db-create-v9r5p" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.262359 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d15bfff7-0338-47e8-bdd1-4ab3241829b0-operator-scripts\") pod \"keystone-2827-account-create-update-bnz5f\" (UID: \"d15bfff7-0338-47e8-bdd1-4ab3241829b0\") " pod="openstack/keystone-2827-account-create-update-bnz5f" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.348791 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-gxqz9"] Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.350224 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gxqz9" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.356858 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ec0f-account-create-update-sgpb5"] Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.358183 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec0f-account-create-update-sgpb5" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.359779 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.363650 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mqjk\" (UniqueName: \"kubernetes.io/projected/08430e05-3044-4b57-bbda-98dbf36842eb-kube-api-access-7mqjk\") pod \"keystone-db-create-v9r5p\" (UID: \"08430e05-3044-4b57-bbda-98dbf36842eb\") " pod="openstack/keystone-db-create-v9r5p" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.363759 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fk7m\" (UniqueName: \"kubernetes.io/projected/d15bfff7-0338-47e8-bdd1-4ab3241829b0-kube-api-access-2fk7m\") pod \"keystone-2827-account-create-update-bnz5f\" (UID: \"d15bfff7-0338-47e8-bdd1-4ab3241829b0\") " pod="openstack/keystone-2827-account-create-update-bnz5f" Feb 
18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.363799 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08430e05-3044-4b57-bbda-98dbf36842eb-operator-scripts\") pod \"keystone-db-create-v9r5p\" (UID: \"08430e05-3044-4b57-bbda-98dbf36842eb\") " pod="openstack/keystone-db-create-v9r5p" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.363862 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d15bfff7-0338-47e8-bdd1-4ab3241829b0-operator-scripts\") pod \"keystone-2827-account-create-update-bnz5f\" (UID: \"d15bfff7-0338-47e8-bdd1-4ab3241829b0\") " pod="openstack/keystone-2827-account-create-update-bnz5f" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.364694 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d15bfff7-0338-47e8-bdd1-4ab3241829b0-operator-scripts\") pod \"keystone-2827-account-create-update-bnz5f\" (UID: \"d15bfff7-0338-47e8-bdd1-4ab3241829b0\") " pod="openstack/keystone-2827-account-create-update-bnz5f" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.364713 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08430e05-3044-4b57-bbda-98dbf36842eb-operator-scripts\") pod \"keystone-db-create-v9r5p\" (UID: \"08430e05-3044-4b57-bbda-98dbf36842eb\") " pod="openstack/keystone-db-create-v9r5p" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.368238 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gxqz9"] Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.374759 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ec0f-account-create-update-sgpb5"] Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.391083 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mqjk\" (UniqueName: \"kubernetes.io/projected/08430e05-3044-4b57-bbda-98dbf36842eb-kube-api-access-7mqjk\") pod \"keystone-db-create-v9r5p\" (UID: \"08430e05-3044-4b57-bbda-98dbf36842eb\") " pod="openstack/keystone-db-create-v9r5p" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.396885 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fk7m\" (UniqueName: \"kubernetes.io/projected/d15bfff7-0338-47e8-bdd1-4ab3241829b0-kube-api-access-2fk7m\") pod \"keystone-2827-account-create-update-bnz5f\" (UID: \"d15bfff7-0338-47e8-bdd1-4ab3241829b0\") " pod="openstack/keystone-2827-account-create-update-bnz5f" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.464820 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kcs7\" (UniqueName: \"kubernetes.io/projected/d690d55d-b5a9-4066-ab9c-c8ddb23288b8-kube-api-access-8kcs7\") pod \"placement-ec0f-account-create-update-sgpb5\" (UID: \"d690d55d-b5a9-4066-ab9c-c8ddb23288b8\") " pod="openstack/placement-ec0f-account-create-update-sgpb5" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.464863 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d690d55d-b5a9-4066-ab9c-c8ddb23288b8-operator-scripts\") pod \"placement-ec0f-account-create-update-sgpb5\" (UID: \"d690d55d-b5a9-4066-ab9c-c8ddb23288b8\") " pod="openstack/placement-ec0f-account-create-update-sgpb5" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.465005 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmdn8\" (UniqueName: \"kubernetes.io/projected/53910845-3e44-4200-8277-3e00ead09fc8-kube-api-access-jmdn8\") pod \"placement-db-create-gxqz9\" (UID: 
\"53910845-3e44-4200-8277-3e00ead09fc8\") " pod="openstack/placement-db-create-gxqz9" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.465064 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53910845-3e44-4200-8277-3e00ead09fc8-operator-scripts\") pod \"placement-db-create-gxqz9\" (UID: \"53910845-3e44-4200-8277-3e00ead09fc8\") " pod="openstack/placement-db-create-gxqz9" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.510051 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-v9r5p" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.518546 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2827-account-create-update-bnz5f" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.566158 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmdn8\" (UniqueName: \"kubernetes.io/projected/53910845-3e44-4200-8277-3e00ead09fc8-kube-api-access-jmdn8\") pod \"placement-db-create-gxqz9\" (UID: \"53910845-3e44-4200-8277-3e00ead09fc8\") " pod="openstack/placement-db-create-gxqz9" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.566295 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53910845-3e44-4200-8277-3e00ead09fc8-operator-scripts\") pod \"placement-db-create-gxqz9\" (UID: \"53910845-3e44-4200-8277-3e00ead09fc8\") " pod="openstack/placement-db-create-gxqz9" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.566382 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kcs7\" (UniqueName: \"kubernetes.io/projected/d690d55d-b5a9-4066-ab9c-c8ddb23288b8-kube-api-access-8kcs7\") pod \"placement-ec0f-account-create-update-sgpb5\" (UID: 
\"d690d55d-b5a9-4066-ab9c-c8ddb23288b8\") " pod="openstack/placement-ec0f-account-create-update-sgpb5" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.566422 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d690d55d-b5a9-4066-ab9c-c8ddb23288b8-operator-scripts\") pod \"placement-ec0f-account-create-update-sgpb5\" (UID: \"d690d55d-b5a9-4066-ab9c-c8ddb23288b8\") " pod="openstack/placement-ec0f-account-create-update-sgpb5" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.567592 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53910845-3e44-4200-8277-3e00ead09fc8-operator-scripts\") pod \"placement-db-create-gxqz9\" (UID: \"53910845-3e44-4200-8277-3e00ead09fc8\") " pod="openstack/placement-db-create-gxqz9" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.567591 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d690d55d-b5a9-4066-ab9c-c8ddb23288b8-operator-scripts\") pod \"placement-ec0f-account-create-update-sgpb5\" (UID: \"d690d55d-b5a9-4066-ab9c-c8ddb23288b8\") " pod="openstack/placement-ec0f-account-create-update-sgpb5" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.582361 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmdn8\" (UniqueName: \"kubernetes.io/projected/53910845-3e44-4200-8277-3e00ead09fc8-kube-api-access-jmdn8\") pod \"placement-db-create-gxqz9\" (UID: \"53910845-3e44-4200-8277-3e00ead09fc8\") " pod="openstack/placement-db-create-gxqz9" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.586665 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kcs7\" (UniqueName: \"kubernetes.io/projected/d690d55d-b5a9-4066-ab9c-c8ddb23288b8-kube-api-access-8kcs7\") pod 
\"placement-ec0f-account-create-update-sgpb5\" (UID: \"d690d55d-b5a9-4066-ab9c-c8ddb23288b8\") " pod="openstack/placement-ec0f-account-create-update-sgpb5" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.672464 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gxqz9" Feb 18 19:55:19 crc kubenswrapper[5007]: I0218 19:55:19.694645 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec0f-account-create-update-sgpb5" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.177925 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-n5t86"] Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.184422 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-n5t86" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.188224 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-n5t86"] Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.212181 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-6136-account-create-update-d9mk4"] Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.214255 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-6136-account-create-update-d9mk4" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.216912 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.236352 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-6136-account-create-update-d9mk4"] Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.281257 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rblwd\" (UniqueName: \"kubernetes.io/projected/804d707b-d8ff-4ae9-883d-2f89e84bc9bd-kube-api-access-rblwd\") pod \"watcher-db-create-n5t86\" (UID: \"804d707b-d8ff-4ae9-883d-2f89e84bc9bd\") " pod="openstack/watcher-db-create-n5t86" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.281313 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9vbw\" (UniqueName: \"kubernetes.io/projected/a0038747-eb9f-4f51-b252-52d9b594fe41-kube-api-access-h9vbw\") pod \"watcher-6136-account-create-update-d9mk4\" (UID: \"a0038747-eb9f-4f51-b252-52d9b594fe41\") " pod="openstack/watcher-6136-account-create-update-d9mk4" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.281402 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804d707b-d8ff-4ae9-883d-2f89e84bc9bd-operator-scripts\") pod \"watcher-db-create-n5t86\" (UID: \"804d707b-d8ff-4ae9-883d-2f89e84bc9bd\") " pod="openstack/watcher-db-create-n5t86" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.281515 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0038747-eb9f-4f51-b252-52d9b594fe41-operator-scripts\") pod 
\"watcher-6136-account-create-update-d9mk4\" (UID: \"a0038747-eb9f-4f51-b252-52d9b594fe41\") " pod="openstack/watcher-6136-account-create-update-d9mk4" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.382777 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9vbw\" (UniqueName: \"kubernetes.io/projected/a0038747-eb9f-4f51-b252-52d9b594fe41-kube-api-access-h9vbw\") pod \"watcher-6136-account-create-update-d9mk4\" (UID: \"a0038747-eb9f-4f51-b252-52d9b594fe41\") " pod="openstack/watcher-6136-account-create-update-d9mk4" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.383097 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804d707b-d8ff-4ae9-883d-2f89e84bc9bd-operator-scripts\") pod \"watcher-db-create-n5t86\" (UID: \"804d707b-d8ff-4ae9-883d-2f89e84bc9bd\") " pod="openstack/watcher-db-create-n5t86" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.383248 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0038747-eb9f-4f51-b252-52d9b594fe41-operator-scripts\") pod \"watcher-6136-account-create-update-d9mk4\" (UID: \"a0038747-eb9f-4f51-b252-52d9b594fe41\") " pod="openstack/watcher-6136-account-create-update-d9mk4" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.383376 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rblwd\" (UniqueName: \"kubernetes.io/projected/804d707b-d8ff-4ae9-883d-2f89e84bc9bd-kube-api-access-rblwd\") pod \"watcher-db-create-n5t86\" (UID: \"804d707b-d8ff-4ae9-883d-2f89e84bc9bd\") " pod="openstack/watcher-db-create-n5t86" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.384168 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a0038747-eb9f-4f51-b252-52d9b594fe41-operator-scripts\") pod \"watcher-6136-account-create-update-d9mk4\" (UID: \"a0038747-eb9f-4f51-b252-52d9b594fe41\") " pod="openstack/watcher-6136-account-create-update-d9mk4" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.384443 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804d707b-d8ff-4ae9-883d-2f89e84bc9bd-operator-scripts\") pod \"watcher-db-create-n5t86\" (UID: \"804d707b-d8ff-4ae9-883d-2f89e84bc9bd\") " pod="openstack/watcher-db-create-n5t86" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.408071 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rblwd\" (UniqueName: \"kubernetes.io/projected/804d707b-d8ff-4ae9-883d-2f89e84bc9bd-kube-api-access-rblwd\") pod \"watcher-db-create-n5t86\" (UID: \"804d707b-d8ff-4ae9-883d-2f89e84bc9bd\") " pod="openstack/watcher-db-create-n5t86" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.409242 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9vbw\" (UniqueName: \"kubernetes.io/projected/a0038747-eb9f-4f51-b252-52d9b594fe41-kube-api-access-h9vbw\") pod \"watcher-6136-account-create-update-d9mk4\" (UID: \"a0038747-eb9f-4f51-b252-52d9b594fe41\") " pod="openstack/watcher-6136-account-create-update-d9mk4" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.464752 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.606876 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-6136-account-create-update-d9mk4" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.609290 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-n5t86" Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.637314 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-766f65774c-c47q7"] Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.637537 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-766f65774c-c47q7" podUID="debb8181-4165-44fd-83ed-0698c664676a" containerName="dnsmasq-dns" containerID="cri-o://74c2b22c662150ada6d8d0e3893554cce11e3aa395f5dbd315c3aa22b37a8e62" gracePeriod=10 Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.867989 5007 generic.go:334] "Generic (PLEG): container finished" podID="debb8181-4165-44fd-83ed-0698c664676a" containerID="74c2b22c662150ada6d8d0e3893554cce11e3aa395f5dbd315c3aa22b37a8e62" exitCode=0 Feb 18 19:55:20 crc kubenswrapper[5007]: I0218 19:55:20.868030 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766f65774c-c47q7" event={"ID":"debb8181-4165-44fd-83ed-0698c664676a","Type":"ContainerDied","Data":"74c2b22c662150ada6d8d0e3893554cce11e3aa395f5dbd315c3aa22b37a8e62"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.270160 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-766f65774c-c47q7" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.426726 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debb8181-4165-44fd-83ed-0698c664676a-config\") pod \"debb8181-4165-44fd-83ed-0698c664676a\" (UID: \"debb8181-4165-44fd-83ed-0698c664676a\") " Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.427123 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/debb8181-4165-44fd-83ed-0698c664676a-dns-svc\") pod \"debb8181-4165-44fd-83ed-0698c664676a\" (UID: \"debb8181-4165-44fd-83ed-0698c664676a\") " Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.427264 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bntmt\" (UniqueName: \"kubernetes.io/projected/debb8181-4165-44fd-83ed-0698c664676a-kube-api-access-bntmt\") pod \"debb8181-4165-44fd-83ed-0698c664676a\" (UID: \"debb8181-4165-44fd-83ed-0698c664676a\") " Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.433872 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/debb8181-4165-44fd-83ed-0698c664676a-kube-api-access-bntmt" (OuterVolumeSpecName: "kube-api-access-bntmt") pod "debb8181-4165-44fd-83ed-0698c664676a" (UID: "debb8181-4165-44fd-83ed-0698c664676a"). InnerVolumeSpecName "kube-api-access-bntmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.474130 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/debb8181-4165-44fd-83ed-0698c664676a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "debb8181-4165-44fd-83ed-0698c664676a" (UID: "debb8181-4165-44fd-83ed-0698c664676a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.475020 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/debb8181-4165-44fd-83ed-0698c664676a-config" (OuterVolumeSpecName: "config") pod "debb8181-4165-44fd-83ed-0698c664676a" (UID: "debb8181-4165-44fd-83ed-0698c664676a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.529502 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/debb8181-4165-44fd-83ed-0698c664676a-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.529545 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/debb8181-4165-44fd-83ed-0698c664676a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.529559 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bntmt\" (UniqueName: \"kubernetes.io/projected/debb8181-4165-44fd-83ed-0698c664676a-kube-api-access-bntmt\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.546461 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2827-account-create-update-bnz5f"] Feb 18 19:55:21 crc kubenswrapper[5007]: W0218 19:55:21.551275 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd15bfff7_0338_47e8_bdd1_4ab3241829b0.slice/crio-6343766bf8e3343d07caf7977dc69458bef182721dd0e6164d752b55a507c8d3 WatchSource:0}: Error finding container 6343766bf8e3343d07caf7977dc69458bef182721dd0e6164d752b55a507c8d3: Status 404 returned error can't find the container with id 6343766bf8e3343d07caf7977dc69458bef182721dd0e6164d752b55a507c8d3 Feb 18 19:55:21 crc 
kubenswrapper[5007]: I0218 19:55:21.579344 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-v9r5p"] Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.696069 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ec0f-account-create-update-sgpb5"] Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.705955 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-n5t86"] Feb 18 19:55:21 crc kubenswrapper[5007]: W0218 19:55:21.709241 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod804d707b_d8ff_4ae9_883d_2f89e84bc9bd.slice/crio-784e9a0db51bcbfcae561d345de08a423e71184ed34cb6e69cd2670fcb6a575b WatchSource:0}: Error finding container 784e9a0db51bcbfcae561d345de08a423e71184ed34cb6e69cd2670fcb6a575b: Status 404 returned error can't find the container with id 784e9a0db51bcbfcae561d345de08a423e71184ed34cb6e69cd2670fcb6a575b Feb 18 19:55:21 crc kubenswrapper[5007]: W0218 19:55:21.713954 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0038747_eb9f_4f51_b252_52d9b594fe41.slice/crio-33cfd45c3208a061dfbd09263b2399c8ec20cd277e204c5762c31a9b990f1ee0 WatchSource:0}: Error finding container 33cfd45c3208a061dfbd09263b2399c8ec20cd277e204c5762c31a9b990f1ee0: Status 404 returned error can't find the container with id 33cfd45c3208a061dfbd09263b2399c8ec20cd277e204c5762c31a9b990f1ee0 Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.724289 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-6136-account-create-update-d9mk4"] Feb 18 19:55:21 crc kubenswrapper[5007]: W0218 19:55:21.731249 5007 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53910845_3e44_4200_8277_3e00ead09fc8.slice/crio-fb6bf7367af1bc7b492ecd4e42cab7299d50b81e57e2b6e955e7f747eecea7b2 WatchSource:0}: Error finding container fb6bf7367af1bc7b492ecd4e42cab7299d50b81e57e2b6e955e7f747eecea7b2: Status 404 returned error can't find the container with id fb6bf7367af1bc7b492ecd4e42cab7299d50b81e57e2b6e955e7f747eecea7b2 Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.733500 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gxqz9"] Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.875819 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2827-account-create-update-bnz5f" event={"ID":"d15bfff7-0338-47e8-bdd1-4ab3241829b0","Type":"ContainerStarted","Data":"50ba0e9a88e842147470f3ad82a36f19d8e4b93517b965706ff7298bff67660d"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.875860 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2827-account-create-update-bnz5f" event={"ID":"d15bfff7-0338-47e8-bdd1-4ab3241829b0","Type":"ContainerStarted","Data":"6343766bf8e3343d07caf7977dc69458bef182721dd0e6164d752b55a507c8d3"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.879507 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v9r5p" event={"ID":"08430e05-3044-4b57-bbda-98dbf36842eb","Type":"ContainerStarted","Data":"2134c6cbc64db4fd062eb3639ac1ee56ecd91b0fc59fc2af441bf04bc3da4b7f"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.879548 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v9r5p" event={"ID":"08430e05-3044-4b57-bbda-98dbf36842eb","Type":"ContainerStarted","Data":"7f10aac0232a21669f12b285ba92d35156c60c0849bdc017588cd604620de95c"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.880930 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-ring-rebalance-ph9tp" event={"ID":"62901f6c-6a4b-4bdb-8913-167f40aa0e66","Type":"ContainerStarted","Data":"7fbe6d71a84db3b26fb321e61a00dfd53cc7363d77b6edcaa2480a1a321b4df0"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.882494 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec0f-account-create-update-sgpb5" event={"ID":"d690d55d-b5a9-4066-ab9c-c8ddb23288b8","Type":"ContainerStarted","Data":"455035d9de57b7d887dfb216cd8a09647b95270e5b8f9ee3aefa2969022ce0a1"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.883443 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gxqz9" event={"ID":"53910845-3e44-4200-8277-3e00ead09fc8","Type":"ContainerStarted","Data":"fb6bf7367af1bc7b492ecd4e42cab7299d50b81e57e2b6e955e7f747eecea7b2"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.884671 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-n5t86" event={"ID":"804d707b-d8ff-4ae9-883d-2f89e84bc9bd","Type":"ContainerStarted","Data":"ea34ae493fec062986f372fa8b9b978212f9d7929c511f1ee113d9ceb94fdda2"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.884761 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-n5t86" event={"ID":"804d707b-d8ff-4ae9-883d-2f89e84bc9bd","Type":"ContainerStarted","Data":"784e9a0db51bcbfcae561d345de08a423e71184ed34cb6e69cd2670fcb6a575b"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.914765 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-2827-account-create-update-bnz5f" podStartSLOduration=2.9147451220000002 podStartE2EDuration="2.914745122s" podCreationTimestamp="2026-02-18 19:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:21.89112947 +0000 UTC m=+1006.673248994" watchObservedRunningTime="2026-02-18 
19:55:21.914745122 +0000 UTC m=+1006.696864646" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.916238 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766f65774c-c47q7" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.923813 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-v9r5p" podStartSLOduration=2.92379669 podStartE2EDuration="2.92379669s" podCreationTimestamp="2026-02-18 19:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:21.905664344 +0000 UTC m=+1006.687783888" watchObservedRunningTime="2026-02-18 19:55:21.92379669 +0000 UTC m=+1006.705916214" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.926759 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-n5t86" podStartSLOduration=1.926751854 podStartE2EDuration="1.926751854s" podCreationTimestamp="2026-02-18 19:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:21.918210611 +0000 UTC m=+1006.700330125" watchObservedRunningTime="2026-02-18 19:55:21.926751854 +0000 UTC m=+1006.708871378" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.927383 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d5c55070-9969-4f85-be20-230c847f2722","Type":"ContainerStarted","Data":"ee8de0c4d928110b7d1fa4f43b45f63d435e7d9563a3775b6de79857e773fb00"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.927423 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-6136-account-create-update-d9mk4" 
event={"ID":"a0038747-eb9f-4f51-b252-52d9b594fe41","Type":"ContainerStarted","Data":"33cfd45c3208a061dfbd09263b2399c8ec20cd277e204c5762c31a9b990f1ee0"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.927451 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766f65774c-c47q7" event={"ID":"debb8181-4165-44fd-83ed-0698c664676a","Type":"ContainerDied","Data":"585ffc1310d040cf82ecea60329f0e007937231a724f313a407a772b91fed01c"} Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.927584 5007 scope.go:117] "RemoveContainer" containerID="74c2b22c662150ada6d8d0e3893554cce11e3aa395f5dbd315c3aa22b37a8e62" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.944021 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-ph9tp" podStartSLOduration=2.521302395 podStartE2EDuration="10.944000665s" podCreationTimestamp="2026-02-18 19:55:11 +0000 UTC" firstStartedPulling="2026-02-18 19:55:12.611379094 +0000 UTC m=+997.393498628" lastFinishedPulling="2026-02-18 19:55:21.034077374 +0000 UTC m=+1005.816196898" observedRunningTime="2026-02-18 19:55:21.932395225 +0000 UTC m=+1006.714514749" watchObservedRunningTime="2026-02-18 19:55:21.944000665 +0000 UTC m=+1006.726120209" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.951651 5007 scope.go:117] "RemoveContainer" containerID="bb1181fe8309f12924ae3409c60039f4cfa16b16402666d004b4453ec7324386" Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.958898 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-766f65774c-c47q7"] Feb 18 19:55:21 crc kubenswrapper[5007]: I0218 19:55:21.977355 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-766f65774c-c47q7"] Feb 18 19:55:22 crc kubenswrapper[5007]: I0218 19:55:22.930832 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec0f-account-create-update-sgpb5" 
event={"ID":"d690d55d-b5a9-4066-ab9c-c8ddb23288b8","Type":"ContainerStarted","Data":"964259f4349869ce443d8a17966f4a9967b3d422a54db3a76bb55321a160ffcc"} Feb 18 19:55:22 crc kubenswrapper[5007]: I0218 19:55:22.932827 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gxqz9" event={"ID":"53910845-3e44-4200-8277-3e00ead09fc8","Type":"ContainerStarted","Data":"084761def3e3f9f96312615afb37ba8fd44a625616dc4c82afaafce1086c6fd3"} Feb 18 19:55:22 crc kubenswrapper[5007]: I0218 19:55:22.934300 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-6136-account-create-update-d9mk4" event={"ID":"a0038747-eb9f-4f51-b252-52d9b594fe41","Type":"ContainerStarted","Data":"a05d673fbff11cc7234b18012d035f6bb1e674b0f0eafc2f7af2082ffae97e67"} Feb 18 19:55:22 crc kubenswrapper[5007]: I0218 19:55:22.955525 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-ec0f-account-create-update-sgpb5" podStartSLOduration=3.95550817 podStartE2EDuration="3.95550817s" podCreationTimestamp="2026-02-18 19:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:22.94952338 +0000 UTC m=+1007.731642944" watchObservedRunningTime="2026-02-18 19:55:22.95550817 +0000 UTC m=+1007.737627694" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.136425 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-ml6mz"] Feb 18 19:55:23 crc kubenswrapper[5007]: E0218 19:55:23.137384 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debb8181-4165-44fd-83ed-0698c664676a" containerName="dnsmasq-dns" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.137420 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="debb8181-4165-44fd-83ed-0698c664676a" containerName="dnsmasq-dns" Feb 18 19:55:23 crc kubenswrapper[5007]: E0218 19:55:23.137466 5007 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debb8181-4165-44fd-83ed-0698c664676a" containerName="init" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.137479 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="debb8181-4165-44fd-83ed-0698c664676a" containerName="init" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.137820 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="debb8181-4165-44fd-83ed-0698c664676a" containerName="dnsmasq-dns" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.140043 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ml6mz" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.148575 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ml6mz"] Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.162936 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ztnr\" (UniqueName: \"kubernetes.io/projected/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697-kube-api-access-4ztnr\") pod \"glance-db-create-ml6mz\" (UID: \"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697\") " pod="openstack/glance-db-create-ml6mz" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.163066 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697-operator-scripts\") pod \"glance-db-create-ml6mz\" (UID: \"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697\") " pod="openstack/glance-db-create-ml6mz" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.210902 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a0ac-account-create-update-k24dv"] Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.213460 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a0ac-account-create-update-k24dv" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.217928 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.228221 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a0ac-account-create-update-k24dv"] Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.265162 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ztnr\" (UniqueName: \"kubernetes.io/projected/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697-kube-api-access-4ztnr\") pod \"glance-db-create-ml6mz\" (UID: \"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697\") " pod="openstack/glance-db-create-ml6mz" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.265271 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6shn\" (UniqueName: \"kubernetes.io/projected/fdee3087-3a04-4f50-b310-26ac36c83aa7-kube-api-access-l6shn\") pod \"glance-a0ac-account-create-update-k24dv\" (UID: \"fdee3087-3a04-4f50-b310-26ac36c83aa7\") " pod="openstack/glance-a0ac-account-create-update-k24dv" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.265396 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697-operator-scripts\") pod \"glance-db-create-ml6mz\" (UID: \"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697\") " pod="openstack/glance-db-create-ml6mz" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.265507 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdee3087-3a04-4f50-b310-26ac36c83aa7-operator-scripts\") pod \"glance-a0ac-account-create-update-k24dv\" (UID: 
\"fdee3087-3a04-4f50-b310-26ac36c83aa7\") " pod="openstack/glance-a0ac-account-create-update-k24dv" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.266750 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697-operator-scripts\") pod \"glance-db-create-ml6mz\" (UID: \"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697\") " pod="openstack/glance-db-create-ml6mz" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.293828 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ztnr\" (UniqueName: \"kubernetes.io/projected/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697-kube-api-access-4ztnr\") pod \"glance-db-create-ml6mz\" (UID: \"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697\") " pod="openstack/glance-db-create-ml6mz" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.366349 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdee3087-3a04-4f50-b310-26ac36c83aa7-operator-scripts\") pod \"glance-a0ac-account-create-update-k24dv\" (UID: \"fdee3087-3a04-4f50-b310-26ac36c83aa7\") " pod="openstack/glance-a0ac-account-create-update-k24dv" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.366865 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6shn\" (UniqueName: \"kubernetes.io/projected/fdee3087-3a04-4f50-b310-26ac36c83aa7-kube-api-access-l6shn\") pod \"glance-a0ac-account-create-update-k24dv\" (UID: \"fdee3087-3a04-4f50-b310-26ac36c83aa7\") " pod="openstack/glance-a0ac-account-create-update-k24dv" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.367049 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdee3087-3a04-4f50-b310-26ac36c83aa7-operator-scripts\") pod \"glance-a0ac-account-create-update-k24dv\" 
(UID: \"fdee3087-3a04-4f50-b310-26ac36c83aa7\") " pod="openstack/glance-a0ac-account-create-update-k24dv" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.396997 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6shn\" (UniqueName: \"kubernetes.io/projected/fdee3087-3a04-4f50-b310-26ac36c83aa7-kube-api-access-l6shn\") pod \"glance-a0ac-account-create-update-k24dv\" (UID: \"fdee3087-3a04-4f50-b310-26ac36c83aa7\") " pod="openstack/glance-a0ac-account-create-update-k24dv" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.485293 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ml6mz" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.534625 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a0ac-account-create-update-k24dv" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.927642 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="debb8181-4165-44fd-83ed-0698c664676a" path="/var/lib/kubelet/pods/debb8181-4165-44fd-83ed-0698c664676a/volumes" Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.945116 5007 generic.go:334] "Generic (PLEG): container finished" podID="d15bfff7-0338-47e8-bdd1-4ab3241829b0" containerID="50ba0e9a88e842147470f3ad82a36f19d8e4b93517b965706ff7298bff67660d" exitCode=0 Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.945208 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2827-account-create-update-bnz5f" event={"ID":"d15bfff7-0338-47e8-bdd1-4ab3241829b0","Type":"ContainerDied","Data":"50ba0e9a88e842147470f3ad82a36f19d8e4b93517b965706ff7298bff67660d"} Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.946976 5007 generic.go:334] "Generic (PLEG): container finished" podID="08430e05-3044-4b57-bbda-98dbf36842eb" containerID="2134c6cbc64db4fd062eb3639ac1ee56ecd91b0fc59fc2af441bf04bc3da4b7f" exitCode=0 Feb 18 19:55:23 
crc kubenswrapper[5007]: I0218 19:55:23.947033 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v9r5p" event={"ID":"08430e05-3044-4b57-bbda-98dbf36842eb","Type":"ContainerDied","Data":"2134c6cbc64db4fd062eb3639ac1ee56ecd91b0fc59fc2af441bf04bc3da4b7f"} Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.948747 5007 generic.go:334] "Generic (PLEG): container finished" podID="d690d55d-b5a9-4066-ab9c-c8ddb23288b8" containerID="964259f4349869ce443d8a17966f4a9967b3d422a54db3a76bb55321a160ffcc" exitCode=0 Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.948781 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec0f-account-create-update-sgpb5" event={"ID":"d690d55d-b5a9-4066-ab9c-c8ddb23288b8","Type":"ContainerDied","Data":"964259f4349869ce443d8a17966f4a9967b3d422a54db3a76bb55321a160ffcc"} Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.950550 5007 generic.go:334] "Generic (PLEG): container finished" podID="53910845-3e44-4200-8277-3e00ead09fc8" containerID="084761def3e3f9f96312615afb37ba8fd44a625616dc4c82afaafce1086c6fd3" exitCode=0 Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.950598 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gxqz9" event={"ID":"53910845-3e44-4200-8277-3e00ead09fc8","Type":"ContainerDied","Data":"084761def3e3f9f96312615afb37ba8fd44a625616dc4c82afaafce1086c6fd3"} Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.952701 5007 generic.go:334] "Generic (PLEG): container finished" podID="804d707b-d8ff-4ae9-883d-2f89e84bc9bd" containerID="ea34ae493fec062986f372fa8b9b978212f9d7929c511f1ee113d9ceb94fdda2" exitCode=0 Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.952752 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-n5t86" 
event={"ID":"804d707b-d8ff-4ae9-883d-2f89e84bc9bd","Type":"ContainerDied","Data":"ea34ae493fec062986f372fa8b9b978212f9d7929c511f1ee113d9ceb94fdda2"} Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.954604 5007 generic.go:334] "Generic (PLEG): container finished" podID="a0038747-eb9f-4f51-b252-52d9b594fe41" containerID="a05d673fbff11cc7234b18012d035f6bb1e674b0f0eafc2f7af2082ffae97e67" exitCode=0 Feb 18 19:55:23 crc kubenswrapper[5007]: I0218 19:55:23.954639 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-6136-account-create-update-d9mk4" event={"ID":"a0038747-eb9f-4f51-b252-52d9b594fe41","Type":"ContainerDied","Data":"a05d673fbff11cc7234b18012d035f6bb1e674b0f0eafc2f7af2082ffae97e67"} Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.364685 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a0ac-account-create-update-k24dv"] Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.378255 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ml6mz"] Feb 18 19:55:24 crc kubenswrapper[5007]: W0218 19:55:24.410627 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdee3087_3a04_4f50_b310_26ac36c83aa7.slice/crio-4f9a6dee0b37dd3e75483545ded3017a44c6a69274e939487affea8bc74ebde2 WatchSource:0}: Error finding container 4f9a6dee0b37dd3e75483545ded3017a44c6a69274e939487affea8bc74ebde2: Status 404 returned error can't find the container with id 4f9a6dee0b37dd3e75483545ded3017a44c6a69274e939487affea8bc74ebde2 Feb 18 19:55:24 crc kubenswrapper[5007]: W0218 19:55:24.411246 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8b1c5b4_dd3b_492e_9c6b_52e9bd83e697.slice/crio-fac6b3ad606a510bf80e1d6fa8fd03192d71bcf786bb48153c88d557da22ffb2 WatchSource:0}: Error finding container 
fac6b3ad606a510bf80e1d6fa8fd03192d71bcf786bb48153c88d557da22ffb2: Status 404 returned error can't find the container with id fac6b3ad606a510bf80e1d6fa8fd03192d71bcf786bb48153c88d557da22ffb2 Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.895100 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6q4hv"] Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.896862 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6q4hv" Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.900850 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.910200 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9f4x\" (UniqueName: \"kubernetes.io/projected/790b5bdb-2f09-4266-9b96-3d05f5c4eff6-kube-api-access-v9f4x\") pod \"root-account-create-update-6q4hv\" (UID: \"790b5bdb-2f09-4266-9b96-3d05f5c4eff6\") " pod="openstack/root-account-create-update-6q4hv" Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.910379 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/790b5bdb-2f09-4266-9b96-3d05f5c4eff6-operator-scripts\") pod \"root-account-create-update-6q4hv\" (UID: \"790b5bdb-2f09-4266-9b96-3d05f5c4eff6\") " pod="openstack/root-account-create-update-6q4hv" Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.911760 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6q4hv"] Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.974674 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"d5c55070-9969-4f85-be20-230c847f2722","Type":"ContainerStarted","Data":"4efb07f81baea94c9493e01642b0255485068b2494d607f6ace1a1eef0e4753f"} Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.976662 5007 generic.go:334] "Generic (PLEG): container finished" podID="e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697" containerID="0db9186cccff0e7d57ef5545887a0bd83b4c5a97851ba8efc545ca56a4e9bb25" exitCode=0 Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.976737 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ml6mz" event={"ID":"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697","Type":"ContainerDied","Data":"0db9186cccff0e7d57ef5545887a0bd83b4c5a97851ba8efc545ca56a4e9bb25"} Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.976813 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ml6mz" event={"ID":"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697","Type":"ContainerStarted","Data":"fac6b3ad606a510bf80e1d6fa8fd03192d71bcf786bb48153c88d557da22ffb2"} Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.982617 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a0ac-account-create-update-k24dv" event={"ID":"fdee3087-3a04-4f50-b310-26ac36c83aa7","Type":"ContainerStarted","Data":"861a6ebc79295f816e7451f9b86496e7ca01fd747eef5ce761fa0a91440a141e"} Feb 18 19:55:24 crc kubenswrapper[5007]: I0218 19:55:24.982683 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a0ac-account-create-update-k24dv" event={"ID":"fdee3087-3a04-4f50-b310-26ac36c83aa7","Type":"ContainerStarted","Data":"4f9a6dee0b37dd3e75483545ded3017a44c6a69274e939487affea8bc74ebde2"} Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.012494 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/790b5bdb-2f09-4266-9b96-3d05f5c4eff6-operator-scripts\") pod \"root-account-create-update-6q4hv\" (UID: 
\"790b5bdb-2f09-4266-9b96-3d05f5c4eff6\") " pod="openstack/root-account-create-update-6q4hv" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.012629 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9f4x\" (UniqueName: \"kubernetes.io/projected/790b5bdb-2f09-4266-9b96-3d05f5c4eff6-kube-api-access-v9f4x\") pod \"root-account-create-update-6q4hv\" (UID: \"790b5bdb-2f09-4266-9b96-3d05f5c4eff6\") " pod="openstack/root-account-create-update-6q4hv" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.013556 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/790b5bdb-2f09-4266-9b96-3d05f5c4eff6-operator-scripts\") pod \"root-account-create-update-6q4hv\" (UID: \"790b5bdb-2f09-4266-9b96-3d05f5c4eff6\") " pod="openstack/root-account-create-update-6q4hv" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.039112 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9f4x\" (UniqueName: \"kubernetes.io/projected/790b5bdb-2f09-4266-9b96-3d05f5c4eff6-kube-api-access-v9f4x\") pod \"root-account-create-update-6q4hv\" (UID: \"790b5bdb-2f09-4266-9b96-3d05f5c4eff6\") " pod="openstack/root-account-create-update-6q4hv" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.222011 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6q4hv" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.431685 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-v9r5p" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.468566 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-a0ac-account-create-update-k24dv" podStartSLOduration=2.468396508 podStartE2EDuration="2.468396508s" podCreationTimestamp="2026-02-18 19:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:25.014820442 +0000 UTC m=+1009.796940006" watchObservedRunningTime="2026-02-18 19:55:25.468396508 +0000 UTC m=+1010.250516032" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.521951 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08430e05-3044-4b57-bbda-98dbf36842eb-operator-scripts\") pod \"08430e05-3044-4b57-bbda-98dbf36842eb\" (UID: \"08430e05-3044-4b57-bbda-98dbf36842eb\") " Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.522163 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mqjk\" (UniqueName: \"kubernetes.io/projected/08430e05-3044-4b57-bbda-98dbf36842eb-kube-api-access-7mqjk\") pod \"08430e05-3044-4b57-bbda-98dbf36842eb\" (UID: \"08430e05-3044-4b57-bbda-98dbf36842eb\") " Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.523142 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08430e05-3044-4b57-bbda-98dbf36842eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08430e05-3044-4b57-bbda-98dbf36842eb" (UID: "08430e05-3044-4b57-bbda-98dbf36842eb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.530226 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08430e05-3044-4b57-bbda-98dbf36842eb-kube-api-access-7mqjk" (OuterVolumeSpecName: "kube-api-access-7mqjk") pod "08430e05-3044-4b57-bbda-98dbf36842eb" (UID: "08430e05-3044-4b57-bbda-98dbf36842eb"). InnerVolumeSpecName "kube-api-access-7mqjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.566092 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2827-account-create-update-bnz5f" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.594099 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-6136-account-create-update-d9mk4" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.596414 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec0f-account-create-update-sgpb5" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.614045 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-n5t86" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.623407 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fk7m\" (UniqueName: \"kubernetes.io/projected/d15bfff7-0338-47e8-bdd1-4ab3241829b0-kube-api-access-2fk7m\") pod \"d15bfff7-0338-47e8-bdd1-4ab3241829b0\" (UID: \"d15bfff7-0338-47e8-bdd1-4ab3241829b0\") " Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.625812 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0038747-eb9f-4f51-b252-52d9b594fe41-operator-scripts\") pod \"a0038747-eb9f-4f51-b252-52d9b594fe41\" (UID: \"a0038747-eb9f-4f51-b252-52d9b594fe41\") " Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.625860 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9vbw\" (UniqueName: \"kubernetes.io/projected/a0038747-eb9f-4f51-b252-52d9b594fe41-kube-api-access-h9vbw\") pod \"a0038747-eb9f-4f51-b252-52d9b594fe41\" (UID: \"a0038747-eb9f-4f51-b252-52d9b594fe41\") " Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.625892 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rblwd\" (UniqueName: \"kubernetes.io/projected/804d707b-d8ff-4ae9-883d-2f89e84bc9bd-kube-api-access-rblwd\") pod \"804d707b-d8ff-4ae9-883d-2f89e84bc9bd\" (UID: \"804d707b-d8ff-4ae9-883d-2f89e84bc9bd\") " Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.625941 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d690d55d-b5a9-4066-ab9c-c8ddb23288b8-operator-scripts\") pod \"d690d55d-b5a9-4066-ab9c-c8ddb23288b8\" (UID: \"d690d55d-b5a9-4066-ab9c-c8ddb23288b8\") " Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.625961 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804d707b-d8ff-4ae9-883d-2f89e84bc9bd-operator-scripts\") pod \"804d707b-d8ff-4ae9-883d-2f89e84bc9bd\" (UID: \"804d707b-d8ff-4ae9-883d-2f89e84bc9bd\") " Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.625999 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d15bfff7-0338-47e8-bdd1-4ab3241829b0-operator-scripts\") pod \"d15bfff7-0338-47e8-bdd1-4ab3241829b0\" (UID: \"d15bfff7-0338-47e8-bdd1-4ab3241829b0\") " Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.626050 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kcs7\" (UniqueName: \"kubernetes.io/projected/d690d55d-b5a9-4066-ab9c-c8ddb23288b8-kube-api-access-8kcs7\") pod \"d690d55d-b5a9-4066-ab9c-c8ddb23288b8\" (UID: \"d690d55d-b5a9-4066-ab9c-c8ddb23288b8\") " Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.626917 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mqjk\" (UniqueName: \"kubernetes.io/projected/08430e05-3044-4b57-bbda-98dbf36842eb-kube-api-access-7mqjk\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.626932 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08430e05-3044-4b57-bbda-98dbf36842eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.630941 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d690d55d-b5a9-4066-ab9c-c8ddb23288b8-kube-api-access-8kcs7" (OuterVolumeSpecName: "kube-api-access-8kcs7") pod "d690d55d-b5a9-4066-ab9c-c8ddb23288b8" (UID: "d690d55d-b5a9-4066-ab9c-c8ddb23288b8"). InnerVolumeSpecName "kube-api-access-8kcs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.633459 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/804d707b-d8ff-4ae9-883d-2f89e84bc9bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "804d707b-d8ff-4ae9-883d-2f89e84bc9bd" (UID: "804d707b-d8ff-4ae9-883d-2f89e84bc9bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.633875 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0038747-eb9f-4f51-b252-52d9b594fe41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0038747-eb9f-4f51-b252-52d9b594fe41" (UID: "a0038747-eb9f-4f51-b252-52d9b594fe41"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.634155 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d690d55d-b5a9-4066-ab9c-c8ddb23288b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d690d55d-b5a9-4066-ab9c-c8ddb23288b8" (UID: "d690d55d-b5a9-4066-ab9c-c8ddb23288b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.634242 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15bfff7-0338-47e8-bdd1-4ab3241829b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d15bfff7-0338-47e8-bdd1-4ab3241829b0" (UID: "d15bfff7-0338-47e8-bdd1-4ab3241829b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.635711 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-gxqz9" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.639767 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15bfff7-0338-47e8-bdd1-4ab3241829b0-kube-api-access-2fk7m" (OuterVolumeSpecName: "kube-api-access-2fk7m") pod "d15bfff7-0338-47e8-bdd1-4ab3241829b0" (UID: "d15bfff7-0338-47e8-bdd1-4ab3241829b0"). InnerVolumeSpecName "kube-api-access-2fk7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.649605 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804d707b-d8ff-4ae9-883d-2f89e84bc9bd-kube-api-access-rblwd" (OuterVolumeSpecName: "kube-api-access-rblwd") pod "804d707b-d8ff-4ae9-883d-2f89e84bc9bd" (UID: "804d707b-d8ff-4ae9-883d-2f89e84bc9bd"). InnerVolumeSpecName "kube-api-access-rblwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.661248 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0038747-eb9f-4f51-b252-52d9b594fe41-kube-api-access-h9vbw" (OuterVolumeSpecName: "kube-api-access-h9vbw") pod "a0038747-eb9f-4f51-b252-52d9b594fe41" (UID: "a0038747-eb9f-4f51-b252-52d9b594fe41"). InnerVolumeSpecName "kube-api-access-h9vbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.728365 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmdn8\" (UniqueName: \"kubernetes.io/projected/53910845-3e44-4200-8277-3e00ead09fc8-kube-api-access-jmdn8\") pod \"53910845-3e44-4200-8277-3e00ead09fc8\" (UID: \"53910845-3e44-4200-8277-3e00ead09fc8\") " Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.729391 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53910845-3e44-4200-8277-3e00ead09fc8-operator-scripts\") pod \"53910845-3e44-4200-8277-3e00ead09fc8\" (UID: \"53910845-3e44-4200-8277-3e00ead09fc8\") " Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.730125 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d690d55d-b5a9-4066-ab9c-c8ddb23288b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.730229 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/804d707b-d8ff-4ae9-883d-2f89e84bc9bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.730310 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d15bfff7-0338-47e8-bdd1-4ab3241829b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.730373 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kcs7\" (UniqueName: \"kubernetes.io/projected/d690d55d-b5a9-4066-ab9c-c8ddb23288b8-kube-api-access-8kcs7\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.730444 5007 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-2fk7m\" (UniqueName: \"kubernetes.io/projected/d15bfff7-0338-47e8-bdd1-4ab3241829b0-kube-api-access-2fk7m\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.730537 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0038747-eb9f-4f51-b252-52d9b594fe41-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.730595 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9vbw\" (UniqueName: \"kubernetes.io/projected/a0038747-eb9f-4f51-b252-52d9b594fe41-kube-api-access-h9vbw\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.730652 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rblwd\" (UniqueName: \"kubernetes.io/projected/804d707b-d8ff-4ae9-883d-2f89e84bc9bd-kube-api-access-rblwd\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.731146 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53910845-3e44-4200-8277-3e00ead09fc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53910845-3e44-4200-8277-3e00ead09fc8" (UID: "53910845-3e44-4200-8277-3e00ead09fc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.742725 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53910845-3e44-4200-8277-3e00ead09fc8-kube-api-access-jmdn8" (OuterVolumeSpecName: "kube-api-access-jmdn8") pod "53910845-3e44-4200-8277-3e00ead09fc8" (UID: "53910845-3e44-4200-8277-3e00ead09fc8"). InnerVolumeSpecName "kube-api-access-jmdn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.832226 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmdn8\" (UniqueName: \"kubernetes.io/projected/53910845-3e44-4200-8277-3e00ead09fc8-kube-api-access-jmdn8\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.832591 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53910845-3e44-4200-8277-3e00ead09fc8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:25 crc kubenswrapper[5007]: W0218 19:55:25.908573 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod790b5bdb_2f09_4266_9b96_3d05f5c4eff6.slice/crio-bd42c522c3e1d6c8ec71bf5f5aa4480c290e4d9fae10910d19220fe293c72973 WatchSource:0}: Error finding container bd42c522c3e1d6c8ec71bf5f5aa4480c290e4d9fae10910d19220fe293c72973: Status 404 returned error can't find the container with id bd42c522c3e1d6c8ec71bf5f5aa4480c290e4d9fae10910d19220fe293c72973 Feb 18 19:55:25 crc kubenswrapper[5007]: I0218 19:55:25.908688 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6q4hv"] Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.021914 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-n5t86" event={"ID":"804d707b-d8ff-4ae9-883d-2f89e84bc9bd","Type":"ContainerDied","Data":"784e9a0db51bcbfcae561d345de08a423e71184ed34cb6e69cd2670fcb6a575b"} Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.021962 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="784e9a0db51bcbfcae561d345de08a423e71184ed34cb6e69cd2670fcb6a575b" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.022056 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-n5t86" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.024661 5007 generic.go:334] "Generic (PLEG): container finished" podID="fdee3087-3a04-4f50-b310-26ac36c83aa7" containerID="861a6ebc79295f816e7451f9b86496e7ca01fd747eef5ce761fa0a91440a141e" exitCode=0 Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.024768 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a0ac-account-create-update-k24dv" event={"ID":"fdee3087-3a04-4f50-b310-26ac36c83aa7","Type":"ContainerDied","Data":"861a6ebc79295f816e7451f9b86496e7ca01fd747eef5ce761fa0a91440a141e"} Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.029408 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-6136-account-create-update-d9mk4" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.029081 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-6136-account-create-update-d9mk4" event={"ID":"a0038747-eb9f-4f51-b252-52d9b594fe41","Type":"ContainerDied","Data":"33cfd45c3208a061dfbd09263b2399c8ec20cd277e204c5762c31a9b990f1ee0"} Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.029891 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33cfd45c3208a061dfbd09263b2399c8ec20cd277e204c5762c31a9b990f1ee0" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.031716 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6q4hv" event={"ID":"790b5bdb-2f09-4266-9b96-3d05f5c4eff6","Type":"ContainerStarted","Data":"bd42c522c3e1d6c8ec71bf5f5aa4480c290e4d9fae10910d19220fe293c72973"} Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.033730 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2827-account-create-update-bnz5f" 
event={"ID":"d15bfff7-0338-47e8-bdd1-4ab3241829b0","Type":"ContainerDied","Data":"6343766bf8e3343d07caf7977dc69458bef182721dd0e6164d752b55a507c8d3"} Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.033755 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6343766bf8e3343d07caf7977dc69458bef182721dd0e6164d752b55a507c8d3" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.033818 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2827-account-create-update-bnz5f" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.035522 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v9r5p" event={"ID":"08430e05-3044-4b57-bbda-98dbf36842eb","Type":"ContainerDied","Data":"7f10aac0232a21669f12b285ba92d35156c60c0849bdc017588cd604620de95c"} Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.035584 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f10aac0232a21669f12b285ba92d35156c60c0849bdc017588cd604620de95c" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.035716 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-v9r5p" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.052488 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec0f-account-create-update-sgpb5" event={"ID":"d690d55d-b5a9-4066-ab9c-c8ddb23288b8","Type":"ContainerDied","Data":"455035d9de57b7d887dfb216cd8a09647b95270e5b8f9ee3aefa2969022ce0a1"} Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.052809 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="455035d9de57b7d887dfb216cd8a09647b95270e5b8f9ee3aefa2969022ce0a1" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.052538 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ec0f-account-create-update-sgpb5" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.062733 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gxqz9" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.063497 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gxqz9" event={"ID":"53910845-3e44-4200-8277-3e00ead09fc8","Type":"ContainerDied","Data":"fb6bf7367af1bc7b492ecd4e42cab7299d50b81e57e2b6e955e7f747eecea7b2"} Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.063533 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb6bf7367af1bc7b492ecd4e42cab7299d50b81e57e2b6e955e7f747eecea7b2" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.422960 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ml6mz" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.448334 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ztnr\" (UniqueName: \"kubernetes.io/projected/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697-kube-api-access-4ztnr\") pod \"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697\" (UID: \"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697\") " Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.448408 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697-operator-scripts\") pod \"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697\" (UID: \"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697\") " Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.449552 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697" (UID: "e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.456235 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697-kube-api-access-4ztnr" (OuterVolumeSpecName: "kube-api-access-4ztnr") pod "e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697" (UID: "e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697"). InnerVolumeSpecName "kube-api-access-4ztnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.551108 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ztnr\" (UniqueName: \"kubernetes.io/projected/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697-kube-api-access-4ztnr\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.551141 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:26 crc kubenswrapper[5007]: I0218 19:55:26.556328 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 18 19:55:27 crc kubenswrapper[5007]: I0218 19:55:27.071132 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ml6mz" event={"ID":"e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697","Type":"ContainerDied","Data":"fac6b3ad606a510bf80e1d6fa8fd03192d71bcf786bb48153c88d557da22ffb2"} Feb 18 19:55:27 crc kubenswrapper[5007]: I0218 19:55:27.071155 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ml6mz" Feb 18 19:55:27 crc kubenswrapper[5007]: I0218 19:55:27.071169 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fac6b3ad606a510bf80e1d6fa8fd03192d71bcf786bb48153c88d557da22ffb2" Feb 18 19:55:27 crc kubenswrapper[5007]: I0218 19:55:27.073260 5007 generic.go:334] "Generic (PLEG): container finished" podID="790b5bdb-2f09-4266-9b96-3d05f5c4eff6" containerID="6de973d2441b7f690e0d0127fbae3dec6dfa065033778c59ead353fd491b33b0" exitCode=0 Feb 18 19:55:27 crc kubenswrapper[5007]: I0218 19:55:27.073343 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6q4hv" event={"ID":"790b5bdb-2f09-4266-9b96-3d05f5c4eff6","Type":"ContainerDied","Data":"6de973d2441b7f690e0d0127fbae3dec6dfa065033778c59ead353fd491b33b0"} Feb 18 19:55:27 crc kubenswrapper[5007]: I0218 19:55:27.162382 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:27 crc kubenswrapper[5007]: E0218 19:55:27.162860 5007 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:55:27 crc kubenswrapper[5007]: E0218 19:55:27.162913 5007 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:55:27 crc kubenswrapper[5007]: E0218 19:55:27.163019 5007 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift podName:d7b23e47-cf61-4c51-81b1-f9f7357c12b8 nodeName:}" failed. No retries permitted until 2026-02-18 19:55:43.162991875 +0000 UTC m=+1027.945111399 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift") pod "swift-storage-0" (UID: "d7b23e47-cf61-4c51-81b1-f9f7357c12b8") : configmap "swift-ring-files" not found Feb 18 19:55:27 crc kubenswrapper[5007]: I0218 19:55:27.992304 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a0ac-account-create-update-k24dv" Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.082138 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdee3087-3a04-4f50-b310-26ac36c83aa7-operator-scripts\") pod \"fdee3087-3a04-4f50-b310-26ac36c83aa7\" (UID: \"fdee3087-3a04-4f50-b310-26ac36c83aa7\") " Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.082696 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6shn\" (UniqueName: \"kubernetes.io/projected/fdee3087-3a04-4f50-b310-26ac36c83aa7-kube-api-access-l6shn\") pod \"fdee3087-3a04-4f50-b310-26ac36c83aa7\" (UID: \"fdee3087-3a04-4f50-b310-26ac36c83aa7\") " Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.083042 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdee3087-3a04-4f50-b310-26ac36c83aa7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fdee3087-3a04-4f50-b310-26ac36c83aa7" (UID: "fdee3087-3a04-4f50-b310-26ac36c83aa7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.083335 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdee3087-3a04-4f50-b310-26ac36c83aa7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.086041 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a0ac-account-create-update-k24dv" Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.086211 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a0ac-account-create-update-k24dv" event={"ID":"fdee3087-3a04-4f50-b310-26ac36c83aa7","Type":"ContainerDied","Data":"4f9a6dee0b37dd3e75483545ded3017a44c6a69274e939487affea8bc74ebde2"} Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.086245 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f9a6dee0b37dd3e75483545ded3017a44c6a69274e939487affea8bc74ebde2" Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.092979 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdee3087-3a04-4f50-b310-26ac36c83aa7-kube-api-access-l6shn" (OuterVolumeSpecName: "kube-api-access-l6shn") pod "fdee3087-3a04-4f50-b310-26ac36c83aa7" (UID: "fdee3087-3a04-4f50-b310-26ac36c83aa7"). InnerVolumeSpecName "kube-api-access-l6shn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.184749 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6shn\" (UniqueName: \"kubernetes.io/projected/fdee3087-3a04-4f50-b310-26ac36c83aa7-kube-api-access-l6shn\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.417026 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6q4hv" Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.491335 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/790b5bdb-2f09-4266-9b96-3d05f5c4eff6-operator-scripts\") pod \"790b5bdb-2f09-4266-9b96-3d05f5c4eff6\" (UID: \"790b5bdb-2f09-4266-9b96-3d05f5c4eff6\") " Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.491837 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9f4x\" (UniqueName: \"kubernetes.io/projected/790b5bdb-2f09-4266-9b96-3d05f5c4eff6-kube-api-access-v9f4x\") pod \"790b5bdb-2f09-4266-9b96-3d05f5c4eff6\" (UID: \"790b5bdb-2f09-4266-9b96-3d05f5c4eff6\") " Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.491887 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/790b5bdb-2f09-4266-9b96-3d05f5c4eff6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "790b5bdb-2f09-4266-9b96-3d05f5c4eff6" (UID: "790b5bdb-2f09-4266-9b96-3d05f5c4eff6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.492476 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/790b5bdb-2f09-4266-9b96-3d05f5c4eff6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.496245 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790b5bdb-2f09-4266-9b96-3d05f5c4eff6-kube-api-access-v9f4x" (OuterVolumeSpecName: "kube-api-access-v9f4x") pod "790b5bdb-2f09-4266-9b96-3d05f5c4eff6" (UID: "790b5bdb-2f09-4266-9b96-3d05f5c4eff6"). InnerVolumeSpecName "kube-api-access-v9f4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:28 crc kubenswrapper[5007]: I0218 19:55:28.593657 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9f4x\" (UniqueName: \"kubernetes.io/projected/790b5bdb-2f09-4266-9b96-3d05f5c4eff6-kube-api-access-v9f4x\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:29 crc kubenswrapper[5007]: I0218 19:55:29.098751 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d5c55070-9969-4f85-be20-230c847f2722","Type":"ContainerStarted","Data":"b288faded383d97c0d9b051c7d66105e258cadbb359259f9c871b9144a670cbc"} Feb 18 19:55:29 crc kubenswrapper[5007]: I0218 19:55:29.099910 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6q4hv" event={"ID":"790b5bdb-2f09-4266-9b96-3d05f5c4eff6","Type":"ContainerDied","Data":"bd42c522c3e1d6c8ec71bf5f5aa4480c290e4d9fae10910d19220fe293c72973"} Feb 18 19:55:29 crc kubenswrapper[5007]: I0218 19:55:29.099951 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd42c522c3e1d6c8ec71bf5f5aa4480c290e4d9fae10910d19220fe293c72973" Feb 18 19:55:29 crc kubenswrapper[5007]: I0218 19:55:29.100016 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6q4hv" Feb 18 19:55:29 crc kubenswrapper[5007]: I0218 19:55:29.134681 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=10.229479031 podStartE2EDuration="49.134666501s" podCreationTimestamp="2026-02-18 19:54:40 +0000 UTC" firstStartedPulling="2026-02-18 19:54:49.1070715 +0000 UTC m=+973.889191024" lastFinishedPulling="2026-02-18 19:55:28.01225897 +0000 UTC m=+1012.794378494" observedRunningTime="2026-02-18 19:55:29.127786156 +0000 UTC m=+1013.909905700" watchObservedRunningTime="2026-02-18 19:55:29.134666501 +0000 UTC m=+1013.916786025" Feb 18 19:55:30 crc kubenswrapper[5007]: I0218 19:55:30.109988 5007 generic.go:334] "Generic (PLEG): container finished" podID="62901f6c-6a4b-4bdb-8913-167f40aa0e66" containerID="7fbe6d71a84db3b26fb321e61a00dfd53cc7363d77b6edcaa2480a1a321b4df0" exitCode=0 Feb 18 19:55:30 crc kubenswrapper[5007]: I0218 19:55:30.110070 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ph9tp" event={"ID":"62901f6c-6a4b-4bdb-8913-167f40aa0e66","Type":"ContainerDied","Data":"7fbe6d71a84db3b26fb321e61a00dfd53cc7363d77b6edcaa2480a1a321b4df0"} Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.160075 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6q4hv"] Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.171459 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6q4hv"] Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.472887 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.583260 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.749668 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/62901f6c-6a4b-4bdb-8913-167f40aa0e66-etc-swift\") pod \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.749832 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-swiftconf\") pod \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.751326 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62901f6c-6a4b-4bdb-8913-167f40aa0e66-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "62901f6c-6a4b-4bdb-8913-167f40aa0e66" (UID: "62901f6c-6a4b-4bdb-8913-167f40aa0e66"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.751383 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5p8\" (UniqueName: \"kubernetes.io/projected/62901f6c-6a4b-4bdb-8913-167f40aa0e66-kube-api-access-qg5p8\") pod \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.751542 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-combined-ca-bundle\") pod \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.751989 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62901f6c-6a4b-4bdb-8913-167f40aa0e66-scripts\") pod \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.752100 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-dispersionconf\") pod \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.752304 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62901f6c-6a4b-4bdb-8913-167f40aa0e66-ring-data-devices\") pod \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\" (UID: \"62901f6c-6a4b-4bdb-8913-167f40aa0e66\") " Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.752961 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/62901f6c-6a4b-4bdb-8913-167f40aa0e66-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "62901f6c-6a4b-4bdb-8913-167f40aa0e66" (UID: "62901f6c-6a4b-4bdb-8913-167f40aa0e66"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.753186 5007 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/62901f6c-6a4b-4bdb-8913-167f40aa0e66-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.753235 5007 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/62901f6c-6a4b-4bdb-8913-167f40aa0e66-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.763268 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62901f6c-6a4b-4bdb-8913-167f40aa0e66-kube-api-access-qg5p8" (OuterVolumeSpecName: "kube-api-access-qg5p8") pod "62901f6c-6a4b-4bdb-8913-167f40aa0e66" (UID: "62901f6c-6a4b-4bdb-8913-167f40aa0e66"). InnerVolumeSpecName "kube-api-access-qg5p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.773658 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "62901f6c-6a4b-4bdb-8913-167f40aa0e66" (UID: "62901f6c-6a4b-4bdb-8913-167f40aa0e66"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.790035 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62901f6c-6a4b-4bdb-8913-167f40aa0e66" (UID: "62901f6c-6a4b-4bdb-8913-167f40aa0e66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.792142 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "62901f6c-6a4b-4bdb-8913-167f40aa0e66" (UID: "62901f6c-6a4b-4bdb-8913-167f40aa0e66"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.796555 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62901f6c-6a4b-4bdb-8913-167f40aa0e66-scripts" (OuterVolumeSpecName: "scripts") pod "62901f6c-6a4b-4bdb-8913-167f40aa0e66" (UID: "62901f6c-6a4b-4bdb-8913-167f40aa0e66"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.854234 5007 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.854290 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5p8\" (UniqueName: \"kubernetes.io/projected/62901f6c-6a4b-4bdb-8913-167f40aa0e66-kube-api-access-qg5p8\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.854302 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.854311 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62901f6c-6a4b-4bdb-8913-167f40aa0e66-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.854319 5007 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/62901f6c-6a4b-4bdb-8913-167f40aa0e66-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[5007]: I0218 19:55:31.921574 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="790b5bdb-2f09-4266-9b96-3d05f5c4eff6" path="/var/lib/kubelet/pods/790b5bdb-2f09-4266-9b96-3d05f5c4eff6/volumes" Feb 18 19:55:32 crc kubenswrapper[5007]: I0218 19:55:32.157214 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ph9tp" event={"ID":"62901f6c-6a4b-4bdb-8913-167f40aa0e66","Type":"ContainerDied","Data":"edcca6b30042e7ce32b06ff79b44e9c50662077a0ab94e655c06663c9ec09761"} Feb 18 19:55:32 crc kubenswrapper[5007]: I0218 
19:55:32.157270 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edcca6b30042e7ce32b06ff79b44e9c50662077a0ab94e655c06663c9ec09761" Feb 18 19:55:32 crc kubenswrapper[5007]: I0218 19:55:32.157348 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ph9tp" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.169738 5007 generic.go:334] "Generic (PLEG): container finished" podID="9c7639d7-1f64-49c0-981d-c6231836c6d4" containerID="3b9b19856216126457701dad5a9b80a4771ad63b0183a1dd02f7bcfc185807c2" exitCode=0 Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.169802 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"9c7639d7-1f64-49c0-981d-c6231836c6d4","Type":"ContainerDied","Data":"3b9b19856216126457701dad5a9b80a4771ad63b0183a1dd02f7bcfc185807c2"} Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.174040 5007 generic.go:334] "Generic (PLEG): container finished" podID="7e34afaa-d033-4975-9f1f-e81ab6b743a9" containerID="d13bd835c7a75e0fd047866199f573f5ab6e93a40365b1ee95241dd3f9c41398" exitCode=0 Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.174140 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e34afaa-d033-4975-9f1f-e81ab6b743a9","Type":"ContainerDied","Data":"d13bd835c7a75e0fd047866199f573f5ab6e93a40365b1ee95241dd3f9c41398"} Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.176691 5007 generic.go:334] "Generic (PLEG): container finished" podID="48869d89-959c-4f88-b8cb-1eb83e7fce30" containerID="f191b0d60a7f09d46f1ce76499a3a0de5097bdafd084bf1c8da964160512e247" exitCode=0 Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.176744 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"48869d89-959c-4f88-b8cb-1eb83e7fce30","Type":"ContainerDied","Data":"f191b0d60a7f09d46f1ce76499a3a0de5097bdafd084bf1c8da964160512e247"} Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.307473 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7jbdh" podUID="42758d74-6ff7-4d38-960f-41e387147352" containerName="ovn-controller" probeResult="failure" output=< Feb 18 19:55:33 crc kubenswrapper[5007]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 19:55:33 crc kubenswrapper[5007]: > Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.308958 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.331395 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6tjjw" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.417863 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9m4t7"] Feb 18 19:55:33 crc kubenswrapper[5007]: E0218 19:55:33.418247 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697" containerName="mariadb-database-create" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418258 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697" containerName="mariadb-database-create" Feb 18 19:55:33 crc kubenswrapper[5007]: E0218 19:55:33.418271 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08430e05-3044-4b57-bbda-98dbf36842eb" containerName="mariadb-database-create" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418277 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="08430e05-3044-4b57-bbda-98dbf36842eb" containerName="mariadb-database-create" Feb 18 19:55:33 crc kubenswrapper[5007]: E0218 19:55:33.418287 5007 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790b5bdb-2f09-4266-9b96-3d05f5c4eff6" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418294 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="790b5bdb-2f09-4266-9b96-3d05f5c4eff6" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: E0218 19:55:33.418307 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0038747-eb9f-4f51-b252-52d9b594fe41" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418312 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0038747-eb9f-4f51-b252-52d9b594fe41" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: E0218 19:55:33.418326 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdee3087-3a04-4f50-b310-26ac36c83aa7" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418331 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdee3087-3a04-4f50-b310-26ac36c83aa7" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: E0218 19:55:33.418339 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804d707b-d8ff-4ae9-883d-2f89e84bc9bd" containerName="mariadb-database-create" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418345 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="804d707b-d8ff-4ae9-883d-2f89e84bc9bd" containerName="mariadb-database-create" Feb 18 19:55:33 crc kubenswrapper[5007]: E0218 19:55:33.418357 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53910845-3e44-4200-8277-3e00ead09fc8" containerName="mariadb-database-create" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418362 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="53910845-3e44-4200-8277-3e00ead09fc8" 
containerName="mariadb-database-create" Feb 18 19:55:33 crc kubenswrapper[5007]: E0218 19:55:33.418375 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15bfff7-0338-47e8-bdd1-4ab3241829b0" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418381 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15bfff7-0338-47e8-bdd1-4ab3241829b0" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: E0218 19:55:33.418390 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d690d55d-b5a9-4066-ab9c-c8ddb23288b8" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418396 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="d690d55d-b5a9-4066-ab9c-c8ddb23288b8" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: E0218 19:55:33.418408 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62901f6c-6a4b-4bdb-8913-167f40aa0e66" containerName="swift-ring-rebalance" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418414 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="62901f6c-6a4b-4bdb-8913-167f40aa0e66" containerName="swift-ring-rebalance" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418596 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="d15bfff7-0338-47e8-bdd1-4ab3241829b0" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418606 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="08430e05-3044-4b57-bbda-98dbf36842eb" containerName="mariadb-database-create" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418614 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="804d707b-d8ff-4ae9-883d-2f89e84bc9bd" containerName="mariadb-database-create" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418621 
5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0038747-eb9f-4f51-b252-52d9b594fe41" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418633 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="53910845-3e44-4200-8277-3e00ead09fc8" containerName="mariadb-database-create" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418641 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="d690d55d-b5a9-4066-ab9c-c8ddb23288b8" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418655 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697" containerName="mariadb-database-create" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418662 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="62901f6c-6a4b-4bdb-8913-167f40aa0e66" containerName="swift-ring-rebalance" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418670 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdee3087-3a04-4f50-b310-26ac36c83aa7" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.418678 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="790b5bdb-2f09-4266-9b96-3d05f5c4eff6" containerName="mariadb-account-create-update" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.419246 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.422649 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-p65fx" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.425002 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.444950 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9m4t7"] Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.491226 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-config-data\") pod \"glance-db-sync-9m4t7\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.491276 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-combined-ca-bundle\") pod \"glance-db-sync-9m4t7\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.491379 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-db-sync-config-data\") pod \"glance-db-sync-9m4t7\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.491502 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2tmh\" (UniqueName: 
\"kubernetes.io/projected/e54a67f1-3156-43d6-b428-62e305e3bc7b-kube-api-access-m2tmh\") pod \"glance-db-sync-9m4t7\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.576488 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7jbdh-config-w8kjb"] Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.577503 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.584260 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.592622 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2tmh\" (UniqueName: \"kubernetes.io/projected/e54a67f1-3156-43d6-b428-62e305e3bc7b-kube-api-access-m2tmh\") pod \"glance-db-sync-9m4t7\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.592725 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-config-data\") pod \"glance-db-sync-9m4t7\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.592750 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-combined-ca-bundle\") pod \"glance-db-sync-9m4t7\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.592820 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-db-sync-config-data\") pod \"glance-db-sync-9m4t7\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.596373 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-db-sync-config-data\") pod \"glance-db-sync-9m4t7\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.596872 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-combined-ca-bundle\") pod \"glance-db-sync-9m4t7\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.598877 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-config-data\") pod \"glance-db-sync-9m4t7\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.600857 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7jbdh-config-w8kjb"] Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.627822 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2tmh\" (UniqueName: \"kubernetes.io/projected/e54a67f1-3156-43d6-b428-62e305e3bc7b-kube-api-access-m2tmh\") pod \"glance-db-sync-9m4t7\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.705019 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snts8\" (UniqueName: \"kubernetes.io/projected/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-kube-api-access-snts8\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.709658 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-additional-scripts\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.709897 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-run-ovn\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.711104 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-run\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.711255 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-scripts\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 
crc kubenswrapper[5007]: I0218 19:55:33.711375 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-log-ovn\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.813040 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snts8\" (UniqueName: \"kubernetes.io/projected/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-kube-api-access-snts8\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.813138 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-additional-scripts\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.813172 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-run-ovn\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.813215 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-run\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 
18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.813241 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-scripts\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.813275 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-log-ovn\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.813607 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-log-ovn\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.813664 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-run-ovn\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.813698 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-run\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.813936 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-additional-scripts\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.815502 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-scripts\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.817774 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9m4t7" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.836529 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snts8\" (UniqueName: \"kubernetes.io/projected/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-kube-api-access-snts8\") pod \"ovn-controller-7jbdh-config-w8kjb\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:33 crc kubenswrapper[5007]: I0218 19:55:33.893499 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:34 crc kubenswrapper[5007]: I0218 19:55:34.204206 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e34afaa-d033-4975-9f1f-e81ab6b743a9","Type":"ContainerStarted","Data":"a0907fd6b319d397ea4996e29f9b606ce47dcc2e2611116a8afff4bb17779685"} Feb 18 19:55:34 crc kubenswrapper[5007]: I0218 19:55:34.204964 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 19:55:34 crc kubenswrapper[5007]: I0218 19:55:34.207248 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"48869d89-959c-4f88-b8cb-1eb83e7fce30","Type":"ContainerStarted","Data":"ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc"} Feb 18 19:55:34 crc kubenswrapper[5007]: I0218 19:55:34.208068 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:55:34 crc kubenswrapper[5007]: I0218 19:55:34.210395 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"9c7639d7-1f64-49c0-981d-c6231836c6d4","Type":"ContainerStarted","Data":"5780c6d9bb41ad66d48fd01a3f8fab2024a2fe5c296326bab60943f3c8345c2b"} Feb 18 19:55:34 crc kubenswrapper[5007]: I0218 19:55:34.210800 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:55:34 crc kubenswrapper[5007]: I0218 19:55:34.227129 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.907047364 podStartE2EDuration="1m1.227115988s" podCreationTimestamp="2026-02-18 19:54:33 +0000 UTC" firstStartedPulling="2026-02-18 19:54:48.466814081 +0000 UTC m=+973.248933605" lastFinishedPulling="2026-02-18 19:54:56.786882705 +0000 UTC m=+981.569002229" 
observedRunningTime="2026-02-18 19:55:34.22333985 +0000 UTC m=+1019.005459374" watchObservedRunningTime="2026-02-18 19:55:34.227115988 +0000 UTC m=+1019.009235512" Feb 18 19:55:34 crc kubenswrapper[5007]: I0218 19:55:34.248687 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.875662414 podStartE2EDuration="1m1.248670452s" podCreationTimestamp="2026-02-18 19:54:33 +0000 UTC" firstStartedPulling="2026-02-18 19:54:48.412896229 +0000 UTC m=+973.195015743" lastFinishedPulling="2026-02-18 19:54:56.785904257 +0000 UTC m=+981.568023781" observedRunningTime="2026-02-18 19:55:34.242005092 +0000 UTC m=+1019.024124606" watchObservedRunningTime="2026-02-18 19:55:34.248670452 +0000 UTC m=+1019.030789976" Feb 18 19:55:34 crc kubenswrapper[5007]: I0218 19:55:34.283531 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=53.538256532 podStartE2EDuration="1m2.283513074s" podCreationTimestamp="2026-02-18 19:54:32 +0000 UTC" firstStartedPulling="2026-02-18 19:54:47.629407755 +0000 UTC m=+972.411527309" lastFinishedPulling="2026-02-18 19:54:56.374664317 +0000 UTC m=+981.156783851" observedRunningTime="2026-02-18 19:55:34.277453981 +0000 UTC m=+1019.059573515" watchObservedRunningTime="2026-02-18 19:55:34.283513074 +0000 UTC m=+1019.065632598" Feb 18 19:55:34 crc kubenswrapper[5007]: I0218 19:55:34.383047 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9m4t7"] Feb 18 19:55:34 crc kubenswrapper[5007]: I0218 19:55:34.442510 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7jbdh-config-w8kjb"] Feb 18 19:55:35 crc kubenswrapper[5007]: I0218 19:55:35.219413 5007 generic.go:334] "Generic (PLEG): container finished" podID="05e059b1-6d6c-4b58-ac64-f7a5e57ccc12" containerID="8c7e5acd990d447d33c0f0612664ab1c135e2e2dbbfe7bef1f1b54619e1fbdbc" exitCode=0 
Feb 18 19:55:35 crc kubenswrapper[5007]: I0218 19:55:35.219479 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7jbdh-config-w8kjb" event={"ID":"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12","Type":"ContainerDied","Data":"8c7e5acd990d447d33c0f0612664ab1c135e2e2dbbfe7bef1f1b54619e1fbdbc"} Feb 18 19:55:35 crc kubenswrapper[5007]: I0218 19:55:35.219986 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7jbdh-config-w8kjb" event={"ID":"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12","Type":"ContainerStarted","Data":"7bf570cadaf1a22a55511ae0cb403071dd1a594b166f34589fa7a273aea2e8a1"} Feb 18 19:55:35 crc kubenswrapper[5007]: I0218 19:55:35.223511 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9m4t7" event={"ID":"e54a67f1-3156-43d6-b428-62e305e3bc7b","Type":"ContainerStarted","Data":"c0e6ec7cb8285062667a03c2e8f0f51762a98266e0e1783e895772cc89a831f5"} Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.111963 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bc6bd"] Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.113864 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bc6bd" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.117037 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.143040 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bc6bd"] Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.161660 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b-operator-scripts\") pod \"root-account-create-update-bc6bd\" (UID: \"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b\") " pod="openstack/root-account-create-update-bc6bd" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.161793 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb4cb\" (UniqueName: \"kubernetes.io/projected/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b-kube-api-access-cb4cb\") pod \"root-account-create-update-bc6bd\" (UID: \"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b\") " pod="openstack/root-account-create-update-bc6bd" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.262863 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb4cb\" (UniqueName: \"kubernetes.io/projected/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b-kube-api-access-cb4cb\") pod \"root-account-create-update-bc6bd\" (UID: \"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b\") " pod="openstack/root-account-create-update-bc6bd" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.263184 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b-operator-scripts\") pod \"root-account-create-update-bc6bd\" (UID: 
\"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b\") " pod="openstack/root-account-create-update-bc6bd" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.263969 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b-operator-scripts\") pod \"root-account-create-update-bc6bd\" (UID: \"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b\") " pod="openstack/root-account-create-update-bc6bd" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.303596 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb4cb\" (UniqueName: \"kubernetes.io/projected/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b-kube-api-access-cb4cb\") pod \"root-account-create-update-bc6bd\" (UID: \"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b\") " pod="openstack/root-account-create-update-bc6bd" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.441399 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bc6bd" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.608420 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.672930 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snts8\" (UniqueName: \"kubernetes.io/projected/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-kube-api-access-snts8\") pod \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.673371 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-log-ovn\") pod \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.673406 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-scripts\") pod \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.673455 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-additional-scripts\") pod \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.673520 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-run-ovn\") pod \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.673582 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-run\") pod \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\" (UID: \"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12\") " Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.675027 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-run" (OuterVolumeSpecName: "var-run") pod "05e059b1-6d6c-4b58-ac64-f7a5e57ccc12" (UID: "05e059b1-6d6c-4b58-ac64-f7a5e57ccc12"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.675188 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "05e059b1-6d6c-4b58-ac64-f7a5e57ccc12" (UID: "05e059b1-6d6c-4b58-ac64-f7a5e57ccc12"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.676357 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-scripts" (OuterVolumeSpecName: "scripts") pod "05e059b1-6d6c-4b58-ac64-f7a5e57ccc12" (UID: "05e059b1-6d6c-4b58-ac64-f7a5e57ccc12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.677245 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "05e059b1-6d6c-4b58-ac64-f7a5e57ccc12" (UID: "05e059b1-6d6c-4b58-ac64-f7a5e57ccc12"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.677279 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "05e059b1-6d6c-4b58-ac64-f7a5e57ccc12" (UID: "05e059b1-6d6c-4b58-ac64-f7a5e57ccc12"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.686712 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-kube-api-access-snts8" (OuterVolumeSpecName: "kube-api-access-snts8") pod "05e059b1-6d6c-4b58-ac64-f7a5e57ccc12" (UID: "05e059b1-6d6c-4b58-ac64-f7a5e57ccc12"). InnerVolumeSpecName "kube-api-access-snts8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.776771 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snts8\" (UniqueName: \"kubernetes.io/projected/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-kube-api-access-snts8\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.776800 5007 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.776809 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.776817 5007 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-additional-scripts\") on 
node \"crc\" DevicePath \"\"" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.776827 5007 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:36 crc kubenswrapper[5007]: I0218 19:55:36.776836 5007 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:37 crc kubenswrapper[5007]: I0218 19:55:37.012159 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bc6bd"] Feb 18 19:55:37 crc kubenswrapper[5007]: W0218 19:55:37.013284 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bfbfda6_47bc_4ecf_bdb0_8fc6ad90237b.slice/crio-39df53bb32bf867501f4d93cc9bdd6efead2fb5f4e87455ade56cef8b63e6b18 WatchSource:0}: Error finding container 39df53bb32bf867501f4d93cc9bdd6efead2fb5f4e87455ade56cef8b63e6b18: Status 404 returned error can't find the container with id 39df53bb32bf867501f4d93cc9bdd6efead2fb5f4e87455ade56cef8b63e6b18 Feb 18 19:55:37 crc kubenswrapper[5007]: I0218 19:55:37.242280 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7jbdh-config-w8kjb" event={"ID":"05e059b1-6d6c-4b58-ac64-f7a5e57ccc12","Type":"ContainerDied","Data":"7bf570cadaf1a22a55511ae0cb403071dd1a594b166f34589fa7a273aea2e8a1"} Feb 18 19:55:37 crc kubenswrapper[5007]: I0218 19:55:37.242913 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bf570cadaf1a22a55511ae0cb403071dd1a594b166f34589fa7a273aea2e8a1" Feb 18 19:55:37 crc kubenswrapper[5007]: I0218 19:55:37.242513 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7jbdh-config-w8kjb" Feb 18 19:55:37 crc kubenswrapper[5007]: I0218 19:55:37.244785 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bc6bd" event={"ID":"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b","Type":"ContainerStarted","Data":"934b25069cbab84ed0f4fc00310f2a2d12ce7d0b720b86b2e9f2b291484d56f4"} Feb 18 19:55:37 crc kubenswrapper[5007]: I0218 19:55:37.244993 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bc6bd" event={"ID":"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b","Type":"ContainerStarted","Data":"39df53bb32bf867501f4d93cc9bdd6efead2fb5f4e87455ade56cef8b63e6b18"} Feb 18 19:55:37 crc kubenswrapper[5007]: I0218 19:55:37.271156 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-bc6bd" podStartSLOduration=1.271138091 podStartE2EDuration="1.271138091s" podCreationTimestamp="2026-02-18 19:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:37.266921541 +0000 UTC m=+1022.049041055" watchObservedRunningTime="2026-02-18 19:55:37.271138091 +0000 UTC m=+1022.053257625" Feb 18 19:55:37 crc kubenswrapper[5007]: I0218 19:55:37.710708 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7jbdh-config-w8kjb"] Feb 18 19:55:37 crc kubenswrapper[5007]: I0218 19:55:37.716813 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7jbdh-config-w8kjb"] Feb 18 19:55:37 crc kubenswrapper[5007]: I0218 19:55:37.921572 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e059b1-6d6c-4b58-ac64-f7a5e57ccc12" path="/var/lib/kubelet/pods/05e059b1-6d6c-4b58-ac64-f7a5e57ccc12/volumes" Feb 18 19:55:38 crc kubenswrapper[5007]: I0218 19:55:38.263693 5007 generic.go:334] "Generic (PLEG): container finished" 
podID="8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b" containerID="934b25069cbab84ed0f4fc00310f2a2d12ce7d0b720b86b2e9f2b291484d56f4" exitCode=0 Feb 18 19:55:38 crc kubenswrapper[5007]: I0218 19:55:38.263749 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bc6bd" event={"ID":"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b","Type":"ContainerDied","Data":"934b25069cbab84ed0f4fc00310f2a2d12ce7d0b720b86b2e9f2b291484d56f4"} Feb 18 19:55:38 crc kubenswrapper[5007]: I0218 19:55:38.281112 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7jbdh" Feb 18 19:55:39 crc kubenswrapper[5007]: I0218 19:55:39.633511 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bc6bd" Feb 18 19:55:39 crc kubenswrapper[5007]: I0218 19:55:39.735873 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb4cb\" (UniqueName: \"kubernetes.io/projected/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b-kube-api-access-cb4cb\") pod \"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b\" (UID: \"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b\") " Feb 18 19:55:39 crc kubenswrapper[5007]: I0218 19:55:39.736080 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b-operator-scripts\") pod \"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b\" (UID: \"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b\") " Feb 18 19:55:39 crc kubenswrapper[5007]: I0218 19:55:39.737049 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b" (UID: "8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:39 crc kubenswrapper[5007]: I0218 19:55:39.740973 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b-kube-api-access-cb4cb" (OuterVolumeSpecName: "kube-api-access-cb4cb") pod "8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b" (UID: "8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b"). InnerVolumeSpecName "kube-api-access-cb4cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:39 crc kubenswrapper[5007]: I0218 19:55:39.838978 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb4cb\" (UniqueName: \"kubernetes.io/projected/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b-kube-api-access-cb4cb\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:39 crc kubenswrapper[5007]: I0218 19:55:39.839336 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:40 crc kubenswrapper[5007]: I0218 19:55:40.279999 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bc6bd" event={"ID":"8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b","Type":"ContainerDied","Data":"39df53bb32bf867501f4d93cc9bdd6efead2fb5f4e87455ade56cef8b63e6b18"} Feb 18 19:55:40 crc kubenswrapper[5007]: I0218 19:55:40.280675 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39df53bb32bf867501f4d93cc9bdd6efead2fb5f4e87455ade56cef8b63e6b18" Feb 18 19:55:40 crc kubenswrapper[5007]: I0218 19:55:40.280648 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bc6bd" Feb 18 19:55:41 crc kubenswrapper[5007]: I0218 19:55:41.473683 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:41 crc kubenswrapper[5007]: I0218 19:55:41.476712 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:42 crc kubenswrapper[5007]: I0218 19:55:42.298651 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:43 crc kubenswrapper[5007]: I0218 19:55:43.200714 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:43 crc kubenswrapper[5007]: I0218 19:55:43.208508 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d7b23e47-cf61-4c51-81b1-f9f7357c12b8-etc-swift\") pod \"swift-storage-0\" (UID: \"d7b23e47-cf61-4c51-81b1-f9f7357c12b8\") " pod="openstack/swift-storage-0" Feb 18 19:55:43 crc kubenswrapper[5007]: I0218 19:55:43.355538 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 18 19:55:44 crc kubenswrapper[5007]: I0218 19:55:44.087604 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:55:44 crc kubenswrapper[5007]: I0218 19:55:44.311614 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="prometheus" containerID="cri-o://ee8de0c4d928110b7d1fa4f43b45f63d435e7d9563a3775b6de79857e773fb00" gracePeriod=600 Feb 18 19:55:44 crc kubenswrapper[5007]: I0218 19:55:44.311692 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="thanos-sidecar" containerID="cri-o://b288faded383d97c0d9b051c7d66105e258cadbb359259f9c871b9144a670cbc" gracePeriod=600 Feb 18 19:55:44 crc kubenswrapper[5007]: I0218 19:55:44.311709 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="config-reloader" containerID="cri-o://4efb07f81baea94c9493e01642b0255485068b2494d607f6ace1a1eef0e4753f" gracePeriod=600 Feb 18 19:55:44 crc kubenswrapper[5007]: I0218 19:55:44.325324 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="9c7639d7-1f64-49c0-981d-c6231836c6d4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 18 19:55:44 crc kubenswrapper[5007]: I0218 19:55:44.630920 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7e34afaa-d033-4975-9f1f-e81ab6b743a9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 18 19:55:44 crc kubenswrapper[5007]: I0218 
19:55:44.952353 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="48869d89-959c-4f88-b8cb-1eb83e7fce30" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 18 19:55:45 crc kubenswrapper[5007]: I0218 19:55:45.322922 5007 generic.go:334] "Generic (PLEG): container finished" podID="d5c55070-9969-4f85-be20-230c847f2722" containerID="b288faded383d97c0d9b051c7d66105e258cadbb359259f9c871b9144a670cbc" exitCode=0 Feb 18 19:55:45 crc kubenswrapper[5007]: I0218 19:55:45.322954 5007 generic.go:334] "Generic (PLEG): container finished" podID="d5c55070-9969-4f85-be20-230c847f2722" containerID="4efb07f81baea94c9493e01642b0255485068b2494d607f6ace1a1eef0e4753f" exitCode=0 Feb 18 19:55:45 crc kubenswrapper[5007]: I0218 19:55:45.322980 5007 generic.go:334] "Generic (PLEG): container finished" podID="d5c55070-9969-4f85-be20-230c847f2722" containerID="ee8de0c4d928110b7d1fa4f43b45f63d435e7d9563a3775b6de79857e773fb00" exitCode=0 Feb 18 19:55:45 crc kubenswrapper[5007]: I0218 19:55:45.323019 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d5c55070-9969-4f85-be20-230c847f2722","Type":"ContainerDied","Data":"b288faded383d97c0d9b051c7d66105e258cadbb359259f9c871b9144a670cbc"} Feb 18 19:55:45 crc kubenswrapper[5007]: I0218 19:55:45.323086 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d5c55070-9969-4f85-be20-230c847f2722","Type":"ContainerDied","Data":"4efb07f81baea94c9493e01642b0255485068b2494d607f6ace1a1eef0e4753f"} Feb 18 19:55:45 crc kubenswrapper[5007]: I0218 19:55:45.323104 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d5c55070-9969-4f85-be20-230c847f2722","Type":"ContainerDied","Data":"ee8de0c4d928110b7d1fa4f43b45f63d435e7d9563a3775b6de79857e773fb00"} Feb 18 19:55:46 crc 
kubenswrapper[5007]: I0218 19:55:46.473007 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.047572 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.100001 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d5c55070-9969-4f85-be20-230c847f2722-tls-assets\") pod \"d5c55070-9969-4f85-be20-230c847f2722\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.100060 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-0\") pod \"d5c55070-9969-4f85-be20-230c847f2722\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.100137 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-1\") pod \"d5c55070-9969-4f85-be20-230c847f2722\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.100185 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-thanos-prometheus-http-client-file\") pod 
\"d5c55070-9969-4f85-be20-230c847f2722\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.100223 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d5c55070-9969-4f85-be20-230c847f2722-config-out\") pod \"d5c55070-9969-4f85-be20-230c847f2722\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.100268 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-config\") pod \"d5c55070-9969-4f85-be20-230c847f2722\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.100308 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-web-config\") pod \"d5c55070-9969-4f85-be20-230c847f2722\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.100332 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5msm6\" (UniqueName: \"kubernetes.io/projected/d5c55070-9969-4f85-be20-230c847f2722-kube-api-access-5msm6\") pod \"d5c55070-9969-4f85-be20-230c847f2722\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.100374 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-2\") pod \"d5c55070-9969-4f85-be20-230c847f2722\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.100525 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"d5c55070-9969-4f85-be20-230c847f2722\" (UID: \"d5c55070-9969-4f85-be20-230c847f2722\") " Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.101745 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "d5c55070-9969-4f85-be20-230c847f2722" (UID: "d5c55070-9969-4f85-be20-230c847f2722"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.101786 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "d5c55070-9969-4f85-be20-230c847f2722" (UID: "d5c55070-9969-4f85-be20-230c847f2722"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.106793 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-config" (OuterVolumeSpecName: "config") pod "d5c55070-9969-4f85-be20-230c847f2722" (UID: "d5c55070-9969-4f85-be20-230c847f2722"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.106783 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c55070-9969-4f85-be20-230c847f2722-kube-api-access-5msm6" (OuterVolumeSpecName: "kube-api-access-5msm6") pod "d5c55070-9969-4f85-be20-230c847f2722" (UID: "d5c55070-9969-4f85-be20-230c847f2722"). InnerVolumeSpecName "kube-api-access-5msm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.106949 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c55070-9969-4f85-be20-230c847f2722-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "d5c55070-9969-4f85-be20-230c847f2722" (UID: "d5c55070-9969-4f85-be20-230c847f2722"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.106964 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "d5c55070-9969-4f85-be20-230c847f2722" (UID: "d5c55070-9969-4f85-be20-230c847f2722"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.108596 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5c55070-9969-4f85-be20-230c847f2722-config-out" (OuterVolumeSpecName: "config-out") pod "d5c55070-9969-4f85-be20-230c847f2722" (UID: "d5c55070-9969-4f85-be20-230c847f2722"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.108662 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "d5c55070-9969-4f85-be20-230c847f2722" (UID: "d5c55070-9969-4f85-be20-230c847f2722"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.127210 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "d5c55070-9969-4f85-be20-230c847f2722" (UID: "d5c55070-9969-4f85-be20-230c847f2722"). InnerVolumeSpecName "pvc-6186620d-f34e-4325-8bee-d656de7291fd". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.136035 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-web-config" (OuterVolumeSpecName: "web-config") pod "d5c55070-9969-4f85-be20-230c847f2722" (UID: "d5c55070-9969-4f85-be20-230c847f2722"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.202392 5007 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") on node \"crc\" " Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.202421 5007 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d5c55070-9969-4f85-be20-230c847f2722-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.202442 5007 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.202453 5007 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.202463 5007 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.202474 5007 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d5c55070-9969-4f85-be20-230c847f2722-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.202482 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.202491 5007 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d5c55070-9969-4f85-be20-230c847f2722-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.202499 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5msm6\" (UniqueName: \"kubernetes.io/projected/d5c55070-9969-4f85-be20-230c847f2722-kube-api-access-5msm6\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.202507 5007 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d5c55070-9969-4f85-be20-230c847f2722-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.225765 5007 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.225901 5007 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6186620d-f34e-4325-8bee-d656de7291fd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd") on node "crc" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.303809 5007 reconciler_common.go:293] "Volume detached for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.361059 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d5c55070-9969-4f85-be20-230c847f2722","Type":"ContainerDied","Data":"a82ac177195d530d98bc7c3ac50dc3bf416df61fcfa58a4cbdae4818e8c835f8"} Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.361130 5007 scope.go:117] "RemoveContainer" containerID="b288faded383d97c0d9b051c7d66105e258cadbb359259f9c871b9144a670cbc" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.361157 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.390385 5007 scope.go:117] "RemoveContainer" containerID="4efb07f81baea94c9493e01642b0255485068b2494d607f6ace1a1eef0e4753f" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.395388 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.401503 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.409755 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.420094 5007 scope.go:117] "RemoveContainer" containerID="ee8de0c4d928110b7d1fa4f43b45f63d435e7d9563a3775b6de79857e773fb00" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.436450 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:55:49 crc kubenswrapper[5007]: E0218 19:55:49.436813 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b" containerName="mariadb-account-create-update" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.436830 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b" containerName="mariadb-account-create-update" Feb 18 19:55:49 crc kubenswrapper[5007]: E0218 19:55:49.436842 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="prometheus" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.436848 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="prometheus" Feb 18 19:55:49 crc kubenswrapper[5007]: E0218 19:55:49.436866 5007 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="05e059b1-6d6c-4b58-ac64-f7a5e57ccc12" containerName="ovn-config" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.436872 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e059b1-6d6c-4b58-ac64-f7a5e57ccc12" containerName="ovn-config" Feb 18 19:55:49 crc kubenswrapper[5007]: E0218 19:55:49.436883 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="init-config-reloader" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.436891 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="init-config-reloader" Feb 18 19:55:49 crc kubenswrapper[5007]: E0218 19:55:49.436900 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="config-reloader" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.436906 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="config-reloader" Feb 18 19:55:49 crc kubenswrapper[5007]: E0218 19:55:49.436914 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="thanos-sidecar" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.436920 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="thanos-sidecar" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.437070 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="thanos-sidecar" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.437084 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="prometheus" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.437094 5007 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="05e059b1-6d6c-4b58-ac64-f7a5e57ccc12" containerName="ovn-config" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.437104 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b" containerName="mariadb-account-create-update" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.437115 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c55070-9969-4f85-be20-230c847f2722" containerName="config-reloader" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.438700 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.441352 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-4q7sz" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.441492 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.448766 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.449501 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.449818 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.450069 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.450569 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 19:55:49 crc 
kubenswrapper[5007]: I0218 19:55:49.450591 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.456230 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.461095 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.464566 5007 scope.go:117] "RemoveContainer" containerID="29a3e7e4dce02b197c67122d6a2f330abb32ef62b89cafcf0832560824364f47" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.506444 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.506493 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqhgp\" (UniqueName: \"kubernetes.io/projected/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-kube-api-access-qqhgp\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.506572 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: 
I0218 19:55:49.506606 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.506846 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.506904 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.506943 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.507036 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.507085 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.507149 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-config\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.507230 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.507257 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.507287 
5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.609292 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.610591 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.610727 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.610792 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.610827 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.610888 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.611815 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.611871 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-config\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.611913 5007 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.611930 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.611949 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.611981 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.612001 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqhgp\" (UniqueName: \"kubernetes.io/projected/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-kube-api-access-qqhgp\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: 
I0218 19:55:49.612060 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.612708 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.612955 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.615480 5007 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.615527 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1309d09147a960927b84a66d0aaca53d5d9063fe1f3c754674268f7a120fbef0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.618366 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.620101 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.620328 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.620527 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.620653 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.620841 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.621952 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.623081 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-config\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.633204 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqhgp\" 
(UniqueName: \"kubernetes.io/projected/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-kube-api-access-qqhgp\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.652056 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"prometheus-metric-storage-0\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.758178 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:55:49 crc kubenswrapper[5007]: I0218 19:55:49.920565 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5c55070-9969-4f85-be20-230c847f2722" path="/var/lib/kubelet/pods/d5c55070-9969-4f85-be20-230c847f2722/volumes" Feb 18 19:55:50 crc kubenswrapper[5007]: I0218 19:55:50.256150 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:55:50 crc kubenswrapper[5007]: W0218 19:55:50.259621 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad6bbbdf_51d7_4968_9e80_eed841e76e5c.slice/crio-6c78e36d92a09bb404e41fccda35e7f0b09a1c6166fc74ac719f06ab209f83ee WatchSource:0}: Error finding container 6c78e36d92a09bb404e41fccda35e7f0b09a1c6166fc74ac719f06ab209f83ee: Status 404 returned error can't find the container with id 6c78e36d92a09bb404e41fccda35e7f0b09a1c6166fc74ac719f06ab209f83ee Feb 18 19:55:50 crc kubenswrapper[5007]: I0218 19:55:50.373925 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9m4t7" 
event={"ID":"e54a67f1-3156-43d6-b428-62e305e3bc7b","Type":"ContainerStarted","Data":"28efe183548b02d593e64047fc3f898e736d3878c42a7d78e5ee141a8d663941"} Feb 18 19:55:50 crc kubenswrapper[5007]: I0218 19:55:50.375953 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad6bbbdf-51d7-4968-9e80-eed841e76e5c","Type":"ContainerStarted","Data":"6c78e36d92a09bb404e41fccda35e7f0b09a1c6166fc74ac719f06ab209f83ee"} Feb 18 19:55:50 crc kubenswrapper[5007]: I0218 19:55:50.377737 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"ce3e71c308b9925a251d87f51974ac879a5b037543e0760acd25fdaa4a1a458f"} Feb 18 19:55:50 crc kubenswrapper[5007]: I0218 19:55:50.377770 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"97846262af68e287d3866c405731295e4a40e32107f219ceb2a214ce13105afa"} Feb 18 19:55:50 crc kubenswrapper[5007]: I0218 19:55:50.401620 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9m4t7" podStartSLOduration=2.969399031 podStartE2EDuration="17.401598033s" podCreationTimestamp="2026-02-18 19:55:33 +0000 UTC" firstStartedPulling="2026-02-18 19:55:34.387580417 +0000 UTC m=+1019.169699941" lastFinishedPulling="2026-02-18 19:55:48.819779419 +0000 UTC m=+1033.601898943" observedRunningTime="2026-02-18 19:55:50.390878478 +0000 UTC m=+1035.172998012" watchObservedRunningTime="2026-02-18 19:55:50.401598033 +0000 UTC m=+1035.183717557" Feb 18 19:55:51 crc kubenswrapper[5007]: I0218 19:55:51.387969 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"0c5747ff64e8b9608465b52fc4d4328b78c8e546d073f3d714972bc00bd1862d"} Feb 18 
19:55:51 crc kubenswrapper[5007]: I0218 19:55:51.388309 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"6fb3fede02627844edf8778e44bc3695f6f467e1422352a332c9c4132e29cb6f"} Feb 18 19:55:51 crc kubenswrapper[5007]: I0218 19:55:51.388324 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"b6a4ed70a9d70740a51a5f97206334a5d31014954cee2b060185265ffe22b5d2"} Feb 18 19:55:52 crc kubenswrapper[5007]: I0218 19:55:52.397100 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"d546dd6f861bf3e6690683bd407f6f6614c9e8a0e7abbbdcef10c3c0db50a054"} Feb 18 19:55:53 crc kubenswrapper[5007]: I0218 19:55:53.408541 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"134e8b38f59239e01f87ca5202c840d482148bbef78c0eb2164e72d29a0ce280"} Feb 18 19:55:53 crc kubenswrapper[5007]: I0218 19:55:53.408867 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"0d59bc9ede7ad23e0c0353f61695c0b27e677213c8b4657913b5ac514fe91180"} Feb 18 19:55:53 crc kubenswrapper[5007]: I0218 19:55:53.408879 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"39ef6659f8cdaaae4247e91455e28da0ef4f68234d641550de0d4340d0e36969"} Feb 18 19:55:54 crc kubenswrapper[5007]: I0218 19:55:54.325323 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0" Feb 18 
19:55:54 crc kubenswrapper[5007]: I0218 19:55:54.427301 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad6bbbdf-51d7-4968-9e80-eed841e76e5c","Type":"ContainerStarted","Data":"122d1d7c28bd2f43f71548ba14d009c3a916998e5da057bef7e9685c47871213"} Feb 18 19:55:54 crc kubenswrapper[5007]: I0218 19:55:54.434941 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"a1e77e3853cafaa84e5f3e6ccd3bf6c97b65b1e5cb89add240a9fb9086b8f43d"} Feb 18 19:55:54 crc kubenswrapper[5007]: I0218 19:55:54.434978 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"0cde5e483d7d5356e276361d951e2413e6287f2954eaa17d681946dfa63a2ad8"} Feb 18 19:55:54 crc kubenswrapper[5007]: I0218 19:55:54.434988 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"c31948e3cdb5ba4d8a18430bfd5e4f990495d877365c7b01d456ec7c15587a35"} Feb 18 19:55:54 crc kubenswrapper[5007]: I0218 19:55:54.435015 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"b4f250fe6065c73052de372533620a86e5884cef69c89cacac7ad33530593036"} Feb 18 19:55:54 crc kubenswrapper[5007]: I0218 19:55:54.633148 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 19:55:54 crc kubenswrapper[5007]: I0218 19:55:54.952718 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:55:55 crc kubenswrapper[5007]: I0218 19:55:55.458764 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"bc5d5b33eac56eb015f2e73629743728e97a22024bd8c17583c1493a57077d02"} Feb 18 19:55:55 crc kubenswrapper[5007]: I0218 19:55:55.458831 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"bba5869a583bffbff9aae72ac98f6e76cffcd7d0bdcee6c1456fc99212dd2c72"} Feb 18 19:55:55 crc kubenswrapper[5007]: I0218 19:55:55.458855 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d7b23e47-cf61-4c51-81b1-f9f7357c12b8","Type":"ContainerStarted","Data":"3348ff3aa00956795d5b6aea6b46d6c733e16a12c936cbcb6c5d8f54e95abea4"} Feb 18 19:55:55 crc kubenswrapper[5007]: I0218 19:55:55.903704 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=41.846416918 podStartE2EDuration="45.903687045s" podCreationTimestamp="2026-02-18 19:55:10 +0000 UTC" firstStartedPulling="2026-02-18 19:55:49.418756836 +0000 UTC m=+1034.200876360" lastFinishedPulling="2026-02-18 19:55:53.476026963 +0000 UTC m=+1038.258146487" observedRunningTime="2026-02-18 19:55:55.509881091 +0000 UTC m=+1040.292000615" watchObservedRunningTime="2026-02-18 19:55:55.903687045 +0000 UTC m=+1040.685806569" Feb 18 19:55:55 crc kubenswrapper[5007]: I0218 19:55:55.907674 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75fb45b9c-prmzx"] Feb 18 19:55:55 crc kubenswrapper[5007]: I0218 19:55:55.914036 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:55 crc kubenswrapper[5007]: I0218 19:55:55.918203 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 19:55:55 crc kubenswrapper[5007]: I0218 19:55:55.935373 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75fb45b9c-prmzx"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.042209 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-ovsdbserver-nb\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.042557 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-ovsdbserver-sb\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.042655 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-config\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.042691 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpmck\" (UniqueName: \"kubernetes.io/projected/5f8c2991-3116-46e0-b302-397291db1589-kube-api-access-qpmck\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " 
pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.042708 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-dns-svc\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.042742 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-dns-swift-storage-0\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.144226 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpmck\" (UniqueName: \"kubernetes.io/projected/5f8c2991-3116-46e0-b302-397291db1589-kube-api-access-qpmck\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.144271 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-dns-svc\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.144299 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-dns-swift-storage-0\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " 
pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.144360 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-ovsdbserver-nb\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.144398 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-ovsdbserver-sb\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.144622 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-config\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.145336 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-ovsdbserver-sb\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.145337 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-dns-svc\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.145776 
5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-config\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.146259 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-dns-swift-storage-0\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.146344 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-ovsdbserver-nb\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.165135 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpmck\" (UniqueName: \"kubernetes.io/projected/5f8c2991-3116-46e0-b302-397291db1589-kube-api-access-qpmck\") pod \"dnsmasq-dns-75fb45b9c-prmzx\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.172925 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-s7llx"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.180190 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-s7llx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.190993 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-s7llx"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.235004 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.331985 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3afb-account-create-update-ckj4v"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.332984 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3afb-account-create-update-ckj4v" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.347835 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65bb66ce-376a-4f9a-9202-1d122f4fd3a5-operator-scripts\") pod \"cinder-db-create-s7llx\" (UID: \"65bb66ce-376a-4f9a-9202-1d122f4fd3a5\") " pod="openstack/cinder-db-create-s7llx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.347969 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvd5p\" (UniqueName: \"kubernetes.io/projected/65bb66ce-376a-4f9a-9202-1d122f4fd3a5-kube-api-access-pvd5p\") pod \"cinder-db-create-s7llx\" (UID: \"65bb66ce-376a-4f9a-9202-1d122f4fd3a5\") " pod="openstack/cinder-db-create-s7llx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.348128 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.371278 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3afb-account-create-update-ckj4v"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.449450 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65bb66ce-376a-4f9a-9202-1d122f4fd3a5-operator-scripts\") pod \"cinder-db-create-s7llx\" (UID: \"65bb66ce-376a-4f9a-9202-1d122f4fd3a5\") " pod="openstack/cinder-db-create-s7llx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.449586 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2pfm\" (UniqueName: \"kubernetes.io/projected/eade00fd-e7cb-4213-bc67-a0c2cec0cc13-kube-api-access-x2pfm\") pod \"cinder-3afb-account-create-update-ckj4v\" (UID: \"eade00fd-e7cb-4213-bc67-a0c2cec0cc13\") " pod="openstack/cinder-3afb-account-create-update-ckj4v" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.449609 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eade00fd-e7cb-4213-bc67-a0c2cec0cc13-operator-scripts\") pod \"cinder-3afb-account-create-update-ckj4v\" (UID: \"eade00fd-e7cb-4213-bc67-a0c2cec0cc13\") " pod="openstack/cinder-3afb-account-create-update-ckj4v" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.449628 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvd5p\" (UniqueName: \"kubernetes.io/projected/65bb66ce-376a-4f9a-9202-1d122f4fd3a5-kube-api-access-pvd5p\") pod \"cinder-db-create-s7llx\" (UID: \"65bb66ce-376a-4f9a-9202-1d122f4fd3a5\") " pod="openstack/cinder-db-create-s7llx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.451505 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65bb66ce-376a-4f9a-9202-1d122f4fd3a5-operator-scripts\") pod \"cinder-db-create-s7llx\" (UID: \"65bb66ce-376a-4f9a-9202-1d122f4fd3a5\") " pod="openstack/cinder-db-create-s7llx" Feb 18 19:55:56 crc kubenswrapper[5007]: 
I0218 19:55:56.501947 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvd5p\" (UniqueName: \"kubernetes.io/projected/65bb66ce-376a-4f9a-9202-1d122f4fd3a5-kube-api-access-pvd5p\") pod \"cinder-db-create-s7llx\" (UID: \"65bb66ce-376a-4f9a-9202-1d122f4fd3a5\") " pod="openstack/cinder-db-create-s7llx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.523774 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rf7bq"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.524017 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-s7llx" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.527855 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rf7bq" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.551515 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2pfm\" (UniqueName: \"kubernetes.io/projected/eade00fd-e7cb-4213-bc67-a0c2cec0cc13-kube-api-access-x2pfm\") pod \"cinder-3afb-account-create-update-ckj4v\" (UID: \"eade00fd-e7cb-4213-bc67-a0c2cec0cc13\") " pod="openstack/cinder-3afb-account-create-update-ckj4v" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.551557 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eade00fd-e7cb-4213-bc67-a0c2cec0cc13-operator-scripts\") pod \"cinder-3afb-account-create-update-ckj4v\" (UID: \"eade00fd-e7cb-4213-bc67-a0c2cec0cc13\") " pod="openstack/cinder-3afb-account-create-update-ckj4v" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.552178 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eade00fd-e7cb-4213-bc67-a0c2cec0cc13-operator-scripts\") pod \"cinder-3afb-account-create-update-ckj4v\" 
(UID: \"eade00fd-e7cb-4213-bc67-a0c2cec0cc13\") " pod="openstack/cinder-3afb-account-create-update-ckj4v" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.590333 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2pfm\" (UniqueName: \"kubernetes.io/projected/eade00fd-e7cb-4213-bc67-a0c2cec0cc13-kube-api-access-x2pfm\") pod \"cinder-3afb-account-create-update-ckj4v\" (UID: \"eade00fd-e7cb-4213-bc67-a0c2cec0cc13\") " pod="openstack/cinder-3afb-account-create-update-ckj4v" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.595075 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rf7bq"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.653372 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9zj7\" (UniqueName: \"kubernetes.io/projected/1e0510ca-f7a8-437b-9625-f6be614e61cb-kube-api-access-j9zj7\") pod \"barbican-db-create-rf7bq\" (UID: \"1e0510ca-f7a8-437b-9625-f6be614e61cb\") " pod="openstack/barbican-db-create-rf7bq" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.653451 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e0510ca-f7a8-437b-9625-f6be614e61cb-operator-scripts\") pod \"barbican-db-create-rf7bq\" (UID: \"1e0510ca-f7a8-437b-9625-f6be614e61cb\") " pod="openstack/barbican-db-create-rf7bq" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.685535 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-xccwz"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.686554 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xccwz" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.706151 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.706182 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-48e7-account-create-update-sczjm"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.706383 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.706511 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sd9ml" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.707363 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-48e7-account-create-update-sczjm" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.712328 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3afb-account-create-update-ckj4v" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.723032 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.732070 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xccwz"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.744381 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.754837 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mb65\" (UniqueName: \"kubernetes.io/projected/8f87bd0d-a9a8-4009-a7f3-54429ed28348-kube-api-access-6mb65\") pod \"barbican-48e7-account-create-update-sczjm\" (UID: \"8f87bd0d-a9a8-4009-a7f3-54429ed28348\") " pod="openstack/barbican-48e7-account-create-update-sczjm" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.754885 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e4fea3-ff28-4966-964d-31c16c2a5c13-config-data\") pod \"keystone-db-sync-xccwz\" (UID: \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\") " pod="openstack/keystone-db-sync-xccwz" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.754910 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9zj7\" (UniqueName: \"kubernetes.io/projected/1e0510ca-f7a8-437b-9625-f6be614e61cb-kube-api-access-j9zj7\") pod \"barbican-db-create-rf7bq\" (UID: \"1e0510ca-f7a8-437b-9625-f6be614e61cb\") " pod="openstack/barbican-db-create-rf7bq" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.754930 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kl9b\" (UniqueName: 
\"kubernetes.io/projected/c8e4fea3-ff28-4966-964d-31c16c2a5c13-kube-api-access-2kl9b\") pod \"keystone-db-sync-xccwz\" (UID: \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\") " pod="openstack/keystone-db-sync-xccwz" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.754951 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f87bd0d-a9a8-4009-a7f3-54429ed28348-operator-scripts\") pod \"barbican-48e7-account-create-update-sczjm\" (UID: \"8f87bd0d-a9a8-4009-a7f3-54429ed28348\") " pod="openstack/barbican-48e7-account-create-update-sczjm" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.754989 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e4fea3-ff28-4966-964d-31c16c2a5c13-combined-ca-bundle\") pod \"keystone-db-sync-xccwz\" (UID: \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\") " pod="openstack/keystone-db-sync-xccwz" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.755061 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e0510ca-f7a8-437b-9625-f6be614e61cb-operator-scripts\") pod \"barbican-db-create-rf7bq\" (UID: \"1e0510ca-f7a8-437b-9625-f6be614e61cb\") " pod="openstack/barbican-db-create-rf7bq" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.756968 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e0510ca-f7a8-437b-9625-f6be614e61cb-operator-scripts\") pod \"barbican-db-create-rf7bq\" (UID: \"1e0510ca-f7a8-437b-9625-f6be614e61cb\") " pod="openstack/barbican-db-create-rf7bq" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.757808 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-48e7-account-create-update-sczjm"] Feb 18 
19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.798745 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9zj7\" (UniqueName: \"kubernetes.io/projected/1e0510ca-f7a8-437b-9625-f6be614e61cb-kube-api-access-j9zj7\") pod \"barbican-db-create-rf7bq\" (UID: \"1e0510ca-f7a8-437b-9625-f6be614e61cb\") " pod="openstack/barbican-db-create-rf7bq" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.850282 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-btd2f"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.851525 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-btd2f" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.855898 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mb65\" (UniqueName: \"kubernetes.io/projected/8f87bd0d-a9a8-4009-a7f3-54429ed28348-kube-api-access-6mb65\") pod \"barbican-48e7-account-create-update-sczjm\" (UID: \"8f87bd0d-a9a8-4009-a7f3-54429ed28348\") " pod="openstack/barbican-48e7-account-create-update-sczjm" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.855948 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e4fea3-ff28-4966-964d-31c16c2a5c13-config-data\") pod \"keystone-db-sync-xccwz\" (UID: \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\") " pod="openstack/keystone-db-sync-xccwz" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.855972 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kl9b\" (UniqueName: \"kubernetes.io/projected/c8e4fea3-ff28-4966-964d-31c16c2a5c13-kube-api-access-2kl9b\") pod \"keystone-db-sync-xccwz\" (UID: \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\") " pod="openstack/keystone-db-sync-xccwz" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.855994 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f87bd0d-a9a8-4009-a7f3-54429ed28348-operator-scripts\") pod \"barbican-48e7-account-create-update-sczjm\" (UID: \"8f87bd0d-a9a8-4009-a7f3-54429ed28348\") " pod="openstack/barbican-48e7-account-create-update-sczjm" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.856023 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e4fea3-ff28-4966-964d-31c16c2a5c13-combined-ca-bundle\") pod \"keystone-db-sync-xccwz\" (UID: \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\") " pod="openstack/keystone-db-sync-xccwz" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.860607 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f87bd0d-a9a8-4009-a7f3-54429ed28348-operator-scripts\") pod \"barbican-48e7-account-create-update-sczjm\" (UID: \"8f87bd0d-a9a8-4009-a7f3-54429ed28348\") " pod="openstack/barbican-48e7-account-create-update-sczjm" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.860887 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e4fea3-ff28-4966-964d-31c16c2a5c13-combined-ca-bundle\") pod \"keystone-db-sync-xccwz\" (UID: \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\") " pod="openstack/keystone-db-sync-xccwz" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.862620 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e4fea3-ff28-4966-964d-31c16c2a5c13-config-data\") pod \"keystone-db-sync-xccwz\" (UID: \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\") " pod="openstack/keystone-db-sync-xccwz" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.863139 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-create-btd2f"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.878950 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rf7bq" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.882399 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kl9b\" (UniqueName: \"kubernetes.io/projected/c8e4fea3-ff28-4966-964d-31c16c2a5c13-kube-api-access-2kl9b\") pod \"keystone-db-sync-xccwz\" (UID: \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\") " pod="openstack/keystone-db-sync-xccwz" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.884542 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mb65\" (UniqueName: \"kubernetes.io/projected/8f87bd0d-a9a8-4009-a7f3-54429ed28348-kube-api-access-6mb65\") pod \"barbican-48e7-account-create-update-sczjm\" (UID: \"8f87bd0d-a9a8-4009-a7f3-54429ed28348\") " pod="openstack/barbican-48e7-account-create-update-sczjm" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.919830 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-905c-account-create-update-nmdg8"] Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.920809 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-905c-account-create-update-nmdg8" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.933023 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.959169 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923f04c2-f4bb-4ba8-8b06-50e4cda22028-operator-scripts\") pod \"neutron-db-create-btd2f\" (UID: \"923f04c2-f4bb-4ba8-8b06-50e4cda22028\") " pod="openstack/neutron-db-create-btd2f" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.959296 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/822282ba-58c6-4182-88e3-1f097b9d912b-operator-scripts\") pod \"neutron-905c-account-create-update-nmdg8\" (UID: \"822282ba-58c6-4182-88e3-1f097b9d912b\") " pod="openstack/neutron-905c-account-create-update-nmdg8" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.959453 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftshd\" (UniqueName: \"kubernetes.io/projected/822282ba-58c6-4182-88e3-1f097b9d912b-kube-api-access-ftshd\") pod \"neutron-905c-account-create-update-nmdg8\" (UID: \"822282ba-58c6-4182-88e3-1f097b9d912b\") " pod="openstack/neutron-905c-account-create-update-nmdg8" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 19:55:56.959546 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgjvp\" (UniqueName: \"kubernetes.io/projected/923f04c2-f4bb-4ba8-8b06-50e4cda22028-kube-api-access-cgjvp\") pod \"neutron-db-create-btd2f\" (UID: \"923f04c2-f4bb-4ba8-8b06-50e4cda22028\") " pod="openstack/neutron-db-create-btd2f" Feb 18 19:55:56 crc kubenswrapper[5007]: I0218 
19:55:56.977507 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-905c-account-create-update-nmdg8"] Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.062585 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgjvp\" (UniqueName: \"kubernetes.io/projected/923f04c2-f4bb-4ba8-8b06-50e4cda22028-kube-api-access-cgjvp\") pod \"neutron-db-create-btd2f\" (UID: \"923f04c2-f4bb-4ba8-8b06-50e4cda22028\") " pod="openstack/neutron-db-create-btd2f" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.062678 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923f04c2-f4bb-4ba8-8b06-50e4cda22028-operator-scripts\") pod \"neutron-db-create-btd2f\" (UID: \"923f04c2-f4bb-4ba8-8b06-50e4cda22028\") " pod="openstack/neutron-db-create-btd2f" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.062719 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/822282ba-58c6-4182-88e3-1f097b9d912b-operator-scripts\") pod \"neutron-905c-account-create-update-nmdg8\" (UID: \"822282ba-58c6-4182-88e3-1f097b9d912b\") " pod="openstack/neutron-905c-account-create-update-nmdg8" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.062768 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftshd\" (UniqueName: \"kubernetes.io/projected/822282ba-58c6-4182-88e3-1f097b9d912b-kube-api-access-ftshd\") pod \"neutron-905c-account-create-update-nmdg8\" (UID: \"822282ba-58c6-4182-88e3-1f097b9d912b\") " pod="openstack/neutron-905c-account-create-update-nmdg8" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.063768 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/923f04c2-f4bb-4ba8-8b06-50e4cda22028-operator-scripts\") pod \"neutron-db-create-btd2f\" (UID: \"923f04c2-f4bb-4ba8-8b06-50e4cda22028\") " pod="openstack/neutron-db-create-btd2f" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.064666 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/822282ba-58c6-4182-88e3-1f097b9d912b-operator-scripts\") pod \"neutron-905c-account-create-update-nmdg8\" (UID: \"822282ba-58c6-4182-88e3-1f097b9d912b\") " pod="openstack/neutron-905c-account-create-update-nmdg8" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.078696 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xccwz" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.093151 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgjvp\" (UniqueName: \"kubernetes.io/projected/923f04c2-f4bb-4ba8-8b06-50e4cda22028-kube-api-access-cgjvp\") pod \"neutron-db-create-btd2f\" (UID: \"923f04c2-f4bb-4ba8-8b06-50e4cda22028\") " pod="openstack/neutron-db-create-btd2f" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.111555 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftshd\" (UniqueName: \"kubernetes.io/projected/822282ba-58c6-4182-88e3-1f097b9d912b-kube-api-access-ftshd\") pod \"neutron-905c-account-create-update-nmdg8\" (UID: \"822282ba-58c6-4182-88e3-1f097b9d912b\") " pod="openstack/neutron-905c-account-create-update-nmdg8" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.121374 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-48e7-account-create-update-sczjm" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.159666 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-kwq4c"] Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.160945 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.165733 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-vzw52" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.170909 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.187564 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-kwq4c"] Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.187980 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-btd2f" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.234443 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75fb45b9c-prmzx"] Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.262910 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-905c-account-create-update-nmdg8" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.267172 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-db-sync-config-data\") pod \"watcher-db-sync-kwq4c\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.267214 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-combined-ca-bundle\") pod \"watcher-db-sync-kwq4c\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.267325 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6498\" (UniqueName: \"kubernetes.io/projected/18a90b81-177c-40e6-8b75-be61fe81187d-kube-api-access-n6498\") pod \"watcher-db-sync-kwq4c\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.267389 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-config-data\") pod \"watcher-db-sync-kwq4c\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.329177 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3afb-account-create-update-ckj4v"] Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.368569 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n6498\" (UniqueName: \"kubernetes.io/projected/18a90b81-177c-40e6-8b75-be61fe81187d-kube-api-access-n6498\") pod \"watcher-db-sync-kwq4c\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.368649 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-config-data\") pod \"watcher-db-sync-kwq4c\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.368682 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-db-sync-config-data\") pod \"watcher-db-sync-kwq4c\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.368699 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-combined-ca-bundle\") pod \"watcher-db-sync-kwq4c\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.381678 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-config-data\") pod \"watcher-db-sync-kwq4c\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.387385 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-db-sync-config-data\") pod 
\"watcher-db-sync-kwq4c\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.388056 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-combined-ca-bundle\") pod \"watcher-db-sync-kwq4c\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.402714 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6498\" (UniqueName: \"kubernetes.io/projected/18a90b81-177c-40e6-8b75-be61fe81187d-kube-api-access-n6498\") pod \"watcher-db-sync-kwq4c\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.412861 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-s7llx"] Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.465806 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.492519 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3afb-account-create-update-ckj4v" event={"ID":"eade00fd-e7cb-4213-bc67-a0c2cec0cc13","Type":"ContainerStarted","Data":"c4ed217267f9f6119f82448a5ace850d86f77c71c8a7c143ffa271ae5d751a4c"} Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.493536 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" event={"ID":"5f8c2991-3116-46e0-b302-397291db1589","Type":"ContainerStarted","Data":"a4458f99e4627d4de4ffaf2f7767b5af27be431456a47a8bb54f967605cd4b8a"} Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.494493 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-s7llx" event={"ID":"65bb66ce-376a-4f9a-9202-1d122f4fd3a5","Type":"ContainerStarted","Data":"e2e2dadcf2c5c39c014becd167ee2930ec52f6e1ac9f8dd73e98f2b896f189bd"} Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.670792 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rf7bq"] Feb 18 19:55:57 crc kubenswrapper[5007]: W0218 19:55:57.739192 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e0510ca_f7a8_437b_9625_f6be614e61cb.slice/crio-9df5b60fb68ff48f1dd082d87eed6a24b92137682d94a26b0ef41d383499797d WatchSource:0}: Error finding container 9df5b60fb68ff48f1dd082d87eed6a24b92137682d94a26b0ef41d383499797d: Status 404 returned error can't find the container with id 9df5b60fb68ff48f1dd082d87eed6a24b92137682d94a26b0ef41d383499797d Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.843064 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-btd2f"] Feb 18 19:55:57 crc kubenswrapper[5007]: W0218 19:55:57.857362 5007 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod923f04c2_f4bb_4ba8_8b06_50e4cda22028.slice/crio-9bf201057671b333aaa3fceebeaf13e21b3e428e456be82e69f5170e2c802adc WatchSource:0}: Error finding container 9bf201057671b333aaa3fceebeaf13e21b3e428e456be82e69f5170e2c802adc: Status 404 returned error can't find the container with id 9bf201057671b333aaa3fceebeaf13e21b3e428e456be82e69f5170e2c802adc Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.877925 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xccwz"] Feb 18 19:55:57 crc kubenswrapper[5007]: I0218 19:55:57.893112 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-48e7-account-create-update-sczjm"] Feb 18 19:55:57 crc kubenswrapper[5007]: W0218 19:55:57.921192 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8e4fea3_ff28_4966_964d_31c16c2a5c13.slice/crio-fbf474ba2cedc44393aa26de147eaf4e3512ff8dcc928f656017658f85bd29a5 WatchSource:0}: Error finding container fbf474ba2cedc44393aa26de147eaf4e3512ff8dcc928f656017658f85bd29a5: Status 404 returned error can't find the container with id fbf474ba2cedc44393aa26de147eaf4e3512ff8dcc928f656017658f85bd29a5 Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.023506 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-905c-account-create-update-nmdg8"] Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.238354 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-kwq4c"] Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.506488 5007 generic.go:334] "Generic (PLEG): container finished" podID="5f8c2991-3116-46e0-b302-397291db1589" containerID="4097dfd0d03a4ab8a4ac4824c85d001012187b444db723ea079e4fd8bcc1dba9" exitCode=0 Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.507009 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" event={"ID":"5f8c2991-3116-46e0-b302-397291db1589","Type":"ContainerDied","Data":"4097dfd0d03a4ab8a4ac4824c85d001012187b444db723ea079e4fd8bcc1dba9"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.511351 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-kwq4c" event={"ID":"18a90b81-177c-40e6-8b75-be61fe81187d","Type":"ContainerStarted","Data":"0cdc0b1a4372221f695a2a3131cdbffad4ea0b8e6908d4ef0a3d7c9fd60119ef"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.512831 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-48e7-account-create-update-sczjm" event={"ID":"8f87bd0d-a9a8-4009-a7f3-54429ed28348","Type":"ContainerStarted","Data":"4ef39e138cf0d8196912720f3b0fb777fdd29750a558037e7cbe4fd632e155ae"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.512863 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-48e7-account-create-update-sczjm" event={"ID":"8f87bd0d-a9a8-4009-a7f3-54429ed28348","Type":"ContainerStarted","Data":"cfdcf50fbc473a7ecd6aeaedc3fb0b5ada21943e06df1a24148f5b0867459770"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.516712 5007 generic.go:334] "Generic (PLEG): container finished" podID="923f04c2-f4bb-4ba8-8b06-50e4cda22028" containerID="19597b0459187abe0165d2adad8c292f845eed1dad825cfc117fefb74ae900ed" exitCode=0 Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.516762 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-btd2f" event={"ID":"923f04c2-f4bb-4ba8-8b06-50e4cda22028","Type":"ContainerDied","Data":"19597b0459187abe0165d2adad8c292f845eed1dad825cfc117fefb74ae900ed"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.516784 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-btd2f" 
event={"ID":"923f04c2-f4bb-4ba8-8b06-50e4cda22028","Type":"ContainerStarted","Data":"9bf201057671b333aaa3fceebeaf13e21b3e428e456be82e69f5170e2c802adc"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.518824 5007 generic.go:334] "Generic (PLEG): container finished" podID="eade00fd-e7cb-4213-bc67-a0c2cec0cc13" containerID="6572d32875d3676d07b76a344e136840d919b7b85966dbaf5b83d2a8272d9e18" exitCode=0 Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.518866 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3afb-account-create-update-ckj4v" event={"ID":"eade00fd-e7cb-4213-bc67-a0c2cec0cc13","Type":"ContainerDied","Data":"6572d32875d3676d07b76a344e136840d919b7b85966dbaf5b83d2a8272d9e18"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.520745 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xccwz" event={"ID":"c8e4fea3-ff28-4966-964d-31c16c2a5c13","Type":"ContainerStarted","Data":"fbf474ba2cedc44393aa26de147eaf4e3512ff8dcc928f656017658f85bd29a5"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.523838 5007 generic.go:334] "Generic (PLEG): container finished" podID="65bb66ce-376a-4f9a-9202-1d122f4fd3a5" containerID="4369676ad53621869bcf2b6e2de3f304f9039df8eaa5c01ca8ce7707a8806728" exitCode=0 Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.523941 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-s7llx" event={"ID":"65bb66ce-376a-4f9a-9202-1d122f4fd3a5","Type":"ContainerDied","Data":"4369676ad53621869bcf2b6e2de3f304f9039df8eaa5c01ca8ce7707a8806728"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.536547 5007 generic.go:334] "Generic (PLEG): container finished" podID="1e0510ca-f7a8-437b-9625-f6be614e61cb" containerID="cc2481f4017b2e290e14eb84787608ef55729eb3eb8170844ce53e955ad03b4d" exitCode=0 Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.536636 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-rf7bq" event={"ID":"1e0510ca-f7a8-437b-9625-f6be614e61cb","Type":"ContainerDied","Data":"cc2481f4017b2e290e14eb84787608ef55729eb3eb8170844ce53e955ad03b4d"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.536660 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rf7bq" event={"ID":"1e0510ca-f7a8-437b-9625-f6be614e61cb","Type":"ContainerStarted","Data":"9df5b60fb68ff48f1dd082d87eed6a24b92137682d94a26b0ef41d383499797d"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.540786 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-905c-account-create-update-nmdg8" event={"ID":"822282ba-58c6-4182-88e3-1f097b9d912b","Type":"ContainerStarted","Data":"9eaeb9f4a0583d11fad85d70694bf7942af7d38df7f7e3c7c5b61ff9aad57bd2"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.540827 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-905c-account-create-update-nmdg8" event={"ID":"822282ba-58c6-4182-88e3-1f097b9d912b","Type":"ContainerStarted","Data":"2e89f7792b17e5c9010a8f4ec7e1dd45d01d174c2661fe04742a0a316ddc0026"} Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.597680 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-48e7-account-create-update-sczjm" podStartSLOduration=2.59766298 podStartE2EDuration="2.59766298s" podCreationTimestamp="2026-02-18 19:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:58.595741335 +0000 UTC m=+1043.377860859" watchObservedRunningTime="2026-02-18 19:55:58.59766298 +0000 UTC m=+1043.379782504" Feb 18 19:55:58 crc kubenswrapper[5007]: I0218 19:55:58.711028 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-905c-account-create-update-nmdg8" podStartSLOduration=2.711002348 podStartE2EDuration="2.711002348s" 
podCreationTimestamp="2026-02-18 19:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:58.667706625 +0000 UTC m=+1043.449826159" watchObservedRunningTime="2026-02-18 19:55:58.711002348 +0000 UTC m=+1043.493121872" Feb 18 19:55:59 crc kubenswrapper[5007]: I0218 19:55:59.556349 5007 generic.go:334] "Generic (PLEG): container finished" podID="8f87bd0d-a9a8-4009-a7f3-54429ed28348" containerID="4ef39e138cf0d8196912720f3b0fb777fdd29750a558037e7cbe4fd632e155ae" exitCode=0 Feb 18 19:55:59 crc kubenswrapper[5007]: I0218 19:55:59.556725 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-48e7-account-create-update-sczjm" event={"ID":"8f87bd0d-a9a8-4009-a7f3-54429ed28348","Type":"ContainerDied","Data":"4ef39e138cf0d8196912720f3b0fb777fdd29750a558037e7cbe4fd632e155ae"} Feb 18 19:55:59 crc kubenswrapper[5007]: I0218 19:55:59.561498 5007 generic.go:334] "Generic (PLEG): container finished" podID="822282ba-58c6-4182-88e3-1f097b9d912b" containerID="9eaeb9f4a0583d11fad85d70694bf7942af7d38df7f7e3c7c5b61ff9aad57bd2" exitCode=0 Feb 18 19:55:59 crc kubenswrapper[5007]: I0218 19:55:59.561540 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-905c-account-create-update-nmdg8" event={"ID":"822282ba-58c6-4182-88e3-1f097b9d912b","Type":"ContainerDied","Data":"9eaeb9f4a0583d11fad85d70694bf7942af7d38df7f7e3c7c5b61ff9aad57bd2"} Feb 18 19:55:59 crc kubenswrapper[5007]: I0218 19:55:59.566882 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" event={"ID":"5f8c2991-3116-46e0-b302-397291db1589","Type":"ContainerStarted","Data":"9ba94a3f48a4ea3c1686b42193961341f636681296c9030bdf268c1d9b5034d2"} Feb 18 19:55:59 crc kubenswrapper[5007]: I0218 19:55:59.567136 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:55:59 crc 
kubenswrapper[5007]: I0218 19:55:59.610405 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" podStartSLOduration=4.610379549 podStartE2EDuration="4.610379549s" podCreationTimestamp="2026-02-18 19:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:59.585845021 +0000 UTC m=+1044.367964545" watchObservedRunningTime="2026-02-18 19:55:59.610379549 +0000 UTC m=+1044.392499103" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.005378 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-btd2f" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.129522 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgjvp\" (UniqueName: \"kubernetes.io/projected/923f04c2-f4bb-4ba8-8b06-50e4cda22028-kube-api-access-cgjvp\") pod \"923f04c2-f4bb-4ba8-8b06-50e4cda22028\" (UID: \"923f04c2-f4bb-4ba8-8b06-50e4cda22028\") " Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.129641 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923f04c2-f4bb-4ba8-8b06-50e4cda22028-operator-scripts\") pod \"923f04c2-f4bb-4ba8-8b06-50e4cda22028\" (UID: \"923f04c2-f4bb-4ba8-8b06-50e4cda22028\") " Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.130443 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923f04c2-f4bb-4ba8-8b06-50e4cda22028-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "923f04c2-f4bb-4ba8-8b06-50e4cda22028" (UID: "923f04c2-f4bb-4ba8-8b06-50e4cda22028"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.135478 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/923f04c2-f4bb-4ba8-8b06-50e4cda22028-kube-api-access-cgjvp" (OuterVolumeSpecName: "kube-api-access-cgjvp") pod "923f04c2-f4bb-4ba8-8b06-50e4cda22028" (UID: "923f04c2-f4bb-4ba8-8b06-50e4cda22028"). InnerVolumeSpecName "kube-api-access-cgjvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.135795 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rf7bq" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.144268 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3afb-account-create-update-ckj4v" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.160313 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-s7llx" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.232061 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvd5p\" (UniqueName: \"kubernetes.io/projected/65bb66ce-376a-4f9a-9202-1d122f4fd3a5-kube-api-access-pvd5p\") pod \"65bb66ce-376a-4f9a-9202-1d122f4fd3a5\" (UID: \"65bb66ce-376a-4f9a-9202-1d122f4fd3a5\") " Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.232166 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2pfm\" (UniqueName: \"kubernetes.io/projected/eade00fd-e7cb-4213-bc67-a0c2cec0cc13-kube-api-access-x2pfm\") pod \"eade00fd-e7cb-4213-bc67-a0c2cec0cc13\" (UID: \"eade00fd-e7cb-4213-bc67-a0c2cec0cc13\") " Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.232199 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65bb66ce-376a-4f9a-9202-1d122f4fd3a5-operator-scripts\") pod \"65bb66ce-376a-4f9a-9202-1d122f4fd3a5\" (UID: \"65bb66ce-376a-4f9a-9202-1d122f4fd3a5\") " Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.232248 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9zj7\" (UniqueName: \"kubernetes.io/projected/1e0510ca-f7a8-437b-9625-f6be614e61cb-kube-api-access-j9zj7\") pod \"1e0510ca-f7a8-437b-9625-f6be614e61cb\" (UID: \"1e0510ca-f7a8-437b-9625-f6be614e61cb\") " Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.232272 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eade00fd-e7cb-4213-bc67-a0c2cec0cc13-operator-scripts\") pod \"eade00fd-e7cb-4213-bc67-a0c2cec0cc13\" (UID: \"eade00fd-e7cb-4213-bc67-a0c2cec0cc13\") " Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.232309 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e0510ca-f7a8-437b-9625-f6be614e61cb-operator-scripts\") pod \"1e0510ca-f7a8-437b-9625-f6be614e61cb\" (UID: \"1e0510ca-f7a8-437b-9625-f6be614e61cb\") " Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.232713 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923f04c2-f4bb-4ba8-8b06-50e4cda22028-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.232732 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgjvp\" (UniqueName: \"kubernetes.io/projected/923f04c2-f4bb-4ba8-8b06-50e4cda22028-kube-api-access-cgjvp\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.232796 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65bb66ce-376a-4f9a-9202-1d122f4fd3a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65bb66ce-376a-4f9a-9202-1d122f4fd3a5" (UID: "65bb66ce-376a-4f9a-9202-1d122f4fd3a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.232806 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e0510ca-f7a8-437b-9625-f6be614e61cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e0510ca-f7a8-437b-9625-f6be614e61cb" (UID: "1e0510ca-f7a8-437b-9625-f6be614e61cb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.232909 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eade00fd-e7cb-4213-bc67-a0c2cec0cc13-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eade00fd-e7cb-4213-bc67-a0c2cec0cc13" (UID: "eade00fd-e7cb-4213-bc67-a0c2cec0cc13"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.236689 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eade00fd-e7cb-4213-bc67-a0c2cec0cc13-kube-api-access-x2pfm" (OuterVolumeSpecName: "kube-api-access-x2pfm") pod "eade00fd-e7cb-4213-bc67-a0c2cec0cc13" (UID: "eade00fd-e7cb-4213-bc67-a0c2cec0cc13"). InnerVolumeSpecName "kube-api-access-x2pfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.236999 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e0510ca-f7a8-437b-9625-f6be614e61cb-kube-api-access-j9zj7" (OuterVolumeSpecName: "kube-api-access-j9zj7") pod "1e0510ca-f7a8-437b-9625-f6be614e61cb" (UID: "1e0510ca-f7a8-437b-9625-f6be614e61cb"). InnerVolumeSpecName "kube-api-access-j9zj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.237043 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65bb66ce-376a-4f9a-9202-1d122f4fd3a5-kube-api-access-pvd5p" (OuterVolumeSpecName: "kube-api-access-pvd5p") pod "65bb66ce-376a-4f9a-9202-1d122f4fd3a5" (UID: "65bb66ce-376a-4f9a-9202-1d122f4fd3a5"). InnerVolumeSpecName "kube-api-access-pvd5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.334217 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2pfm\" (UniqueName: \"kubernetes.io/projected/eade00fd-e7cb-4213-bc67-a0c2cec0cc13-kube-api-access-x2pfm\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.334246 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65bb66ce-376a-4f9a-9202-1d122f4fd3a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.334257 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9zj7\" (UniqueName: \"kubernetes.io/projected/1e0510ca-f7a8-437b-9625-f6be614e61cb-kube-api-access-j9zj7\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.334266 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eade00fd-e7cb-4213-bc67-a0c2cec0cc13-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.334274 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e0510ca-f7a8-437b-9625-f6be614e61cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.334283 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvd5p\" (UniqueName: \"kubernetes.io/projected/65bb66ce-376a-4f9a-9202-1d122f4fd3a5-kube-api-access-pvd5p\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.576759 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-btd2f" 
event={"ID":"923f04c2-f4bb-4ba8-8b06-50e4cda22028","Type":"ContainerDied","Data":"9bf201057671b333aaa3fceebeaf13e21b3e428e456be82e69f5170e2c802adc"} Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.576788 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-btd2f" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.576812 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bf201057671b333aaa3fceebeaf13e21b3e428e456be82e69f5170e2c802adc" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.578519 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3afb-account-create-update-ckj4v" event={"ID":"eade00fd-e7cb-4213-bc67-a0c2cec0cc13","Type":"ContainerDied","Data":"c4ed217267f9f6119f82448a5ace850d86f77c71c8a7c143ffa271ae5d751a4c"} Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.578591 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4ed217267f9f6119f82448a5ace850d86f77c71c8a7c143ffa271ae5d751a4c" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.578651 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3afb-account-create-update-ckj4v" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.582504 5007 generic.go:334] "Generic (PLEG): container finished" podID="e54a67f1-3156-43d6-b428-62e305e3bc7b" containerID="28efe183548b02d593e64047fc3f898e736d3878c42a7d78e5ee141a8d663941" exitCode=0 Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.582589 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9m4t7" event={"ID":"e54a67f1-3156-43d6-b428-62e305e3bc7b","Type":"ContainerDied","Data":"28efe183548b02d593e64047fc3f898e736d3878c42a7d78e5ee141a8d663941"} Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.588281 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-s7llx" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.588803 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-s7llx" event={"ID":"65bb66ce-376a-4f9a-9202-1d122f4fd3a5","Type":"ContainerDied","Data":"e2e2dadcf2c5c39c014becd167ee2930ec52f6e1ac9f8dd73e98f2b896f189bd"} Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.588834 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2e2dadcf2c5c39c014becd167ee2930ec52f6e1ac9f8dd73e98f2b896f189bd" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.590616 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rf7bq" event={"ID":"1e0510ca-f7a8-437b-9625-f6be614e61cb","Type":"ContainerDied","Data":"9df5b60fb68ff48f1dd082d87eed6a24b92137682d94a26b0ef41d383499797d"} Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.590657 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9df5b60fb68ff48f1dd082d87eed6a24b92137682d94a26b0ef41d383499797d" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.590712 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rf7bq" Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.595522 5007 generic.go:334] "Generic (PLEG): container finished" podID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerID="122d1d7c28bd2f43f71548ba14d009c3a916998e5da057bef7e9685c47871213" exitCode=0 Feb 18 19:56:00 crc kubenswrapper[5007]: I0218 19:56:00.596656 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad6bbbdf-51d7-4968-9e80-eed841e76e5c","Type":"ContainerDied","Data":"122d1d7c28bd2f43f71548ba14d009c3a916998e5da057bef7e9685c47871213"} Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.600346 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-48e7-account-create-update-sczjm" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.614023 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-905c-account-create-update-nmdg8" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.637740 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-905c-account-create-update-nmdg8" event={"ID":"822282ba-58c6-4182-88e3-1f097b9d912b","Type":"ContainerDied","Data":"2e89f7792b17e5c9010a8f4ec7e1dd45d01d174c2661fe04742a0a316ddc0026"} Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.637792 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-905c-account-create-update-nmdg8" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.637796 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e89f7792b17e5c9010a8f4ec7e1dd45d01d174c2661fe04742a0a316ddc0026" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.644607 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-48e7-account-create-update-sczjm" event={"ID":"8f87bd0d-a9a8-4009-a7f3-54429ed28348","Type":"ContainerDied","Data":"cfdcf50fbc473a7ecd6aeaedc3fb0b5ada21943e06df1a24148f5b0867459770"} Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.644643 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfdcf50fbc473a7ecd6aeaedc3fb0b5ada21943e06df1a24148f5b0867459770" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.644713 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-48e7-account-create-update-sczjm" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.704746 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftshd\" (UniqueName: \"kubernetes.io/projected/822282ba-58c6-4182-88e3-1f097b9d912b-kube-api-access-ftshd\") pod \"822282ba-58c6-4182-88e3-1f097b9d912b\" (UID: \"822282ba-58c6-4182-88e3-1f097b9d912b\") " Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.704926 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f87bd0d-a9a8-4009-a7f3-54429ed28348-operator-scripts\") pod \"8f87bd0d-a9a8-4009-a7f3-54429ed28348\" (UID: \"8f87bd0d-a9a8-4009-a7f3-54429ed28348\") " Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.705017 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mb65\" (UniqueName: \"kubernetes.io/projected/8f87bd0d-a9a8-4009-a7f3-54429ed28348-kube-api-access-6mb65\") pod \"8f87bd0d-a9a8-4009-a7f3-54429ed28348\" (UID: \"8f87bd0d-a9a8-4009-a7f3-54429ed28348\") " Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.705072 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/822282ba-58c6-4182-88e3-1f097b9d912b-operator-scripts\") pod \"822282ba-58c6-4182-88e3-1f097b9d912b\" (UID: \"822282ba-58c6-4182-88e3-1f097b9d912b\") " Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.705677 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f87bd0d-a9a8-4009-a7f3-54429ed28348-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f87bd0d-a9a8-4009-a7f3-54429ed28348" (UID: "8f87bd0d-a9a8-4009-a7f3-54429ed28348"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.705736 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/822282ba-58c6-4182-88e3-1f097b9d912b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "822282ba-58c6-4182-88e3-1f097b9d912b" (UID: "822282ba-58c6-4182-88e3-1f097b9d912b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.708824 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f87bd0d-a9a8-4009-a7f3-54429ed28348-kube-api-access-6mb65" (OuterVolumeSpecName: "kube-api-access-6mb65") pod "8f87bd0d-a9a8-4009-a7f3-54429ed28348" (UID: "8f87bd0d-a9a8-4009-a7f3-54429ed28348"). InnerVolumeSpecName "kube-api-access-6mb65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.711182 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/822282ba-58c6-4182-88e3-1f097b9d912b-kube-api-access-ftshd" (OuterVolumeSpecName: "kube-api-access-ftshd") pod "822282ba-58c6-4182-88e3-1f097b9d912b" (UID: "822282ba-58c6-4182-88e3-1f097b9d912b"). InnerVolumeSpecName "kube-api-access-ftshd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.806836 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f87bd0d-a9a8-4009-a7f3-54429ed28348-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.806874 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mb65\" (UniqueName: \"kubernetes.io/projected/8f87bd0d-a9a8-4009-a7f3-54429ed28348-kube-api-access-6mb65\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.806887 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/822282ba-58c6-4182-88e3-1f097b9d912b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:03 crc kubenswrapper[5007]: I0218 19:56:03.806900 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftshd\" (UniqueName: \"kubernetes.io/projected/822282ba-58c6-4182-88e3-1f097b9d912b-kube-api-access-ftshd\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:06 crc kubenswrapper[5007]: I0218 19:56:06.237615 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:56:06 crc kubenswrapper[5007]: I0218 19:56:06.304707 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c566df67-pzcjs"] Feb 18 19:56:06 crc kubenswrapper[5007]: I0218 19:56:06.304941 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" podUID="8659fece-e93f-4032-9050-659bc594f86b" containerName="dnsmasq-dns" containerID="cri-o://6a2cdb9c89efabe8b4e4dc262bb9b9a7aa539bfa446ec435f788a22351f1a1d1" gracePeriod=10 Feb 18 19:56:06 crc kubenswrapper[5007]: I0218 19:56:06.689379 5007 generic.go:334] "Generic (PLEG): container finished" 
podID="8659fece-e93f-4032-9050-659bc594f86b" containerID="6a2cdb9c89efabe8b4e4dc262bb9b9a7aa539bfa446ec435f788a22351f1a1d1" exitCode=0 Feb 18 19:56:06 crc kubenswrapper[5007]: I0218 19:56:06.689424 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" event={"ID":"8659fece-e93f-4032-9050-659bc594f86b","Type":"ContainerDied","Data":"6a2cdb9c89efabe8b4e4dc262bb9b9a7aa539bfa446ec435f788a22351f1a1d1"} Feb 18 19:56:08 crc kubenswrapper[5007]: I0218 19:56:08.716936 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9m4t7" event={"ID":"e54a67f1-3156-43d6-b428-62e305e3bc7b","Type":"ContainerDied","Data":"c0e6ec7cb8285062667a03c2e8f0f51762a98266e0e1783e895772cc89a831f5"} Feb 18 19:56:08 crc kubenswrapper[5007]: I0218 19:56:08.717462 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0e6ec7cb8285062667a03c2e8f0f51762a98266e0e1783e895772cc89a831f5" Feb 18 19:56:08 crc kubenswrapper[5007]: I0218 19:56:08.841743 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9m4t7" Feb 18 19:56:08 crc kubenswrapper[5007]: I0218 19:56:08.904996 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:56:08 crc kubenswrapper[5007]: I0218 19:56:08.906090 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2tmh\" (UniqueName: \"kubernetes.io/projected/e54a67f1-3156-43d6-b428-62e305e3bc7b-kube-api-access-m2tmh\") pod \"e54a67f1-3156-43d6-b428-62e305e3bc7b\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " Feb 18 19:56:08 crc kubenswrapper[5007]: I0218 19:56:08.906220 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-db-sync-config-data\") pod \"e54a67f1-3156-43d6-b428-62e305e3bc7b\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " Feb 18 19:56:08 crc kubenswrapper[5007]: I0218 19:56:08.906258 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-config-data\") pod \"e54a67f1-3156-43d6-b428-62e305e3bc7b\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " Feb 18 19:56:08 crc kubenswrapper[5007]: I0218 19:56:08.906353 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-combined-ca-bundle\") pod \"e54a67f1-3156-43d6-b428-62e305e3bc7b\" (UID: \"e54a67f1-3156-43d6-b428-62e305e3bc7b\") " Feb 18 19:56:08 crc kubenswrapper[5007]: I0218 19:56:08.916487 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e54a67f1-3156-43d6-b428-62e305e3bc7b" (UID: "e54a67f1-3156-43d6-b428-62e305e3bc7b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:08 crc kubenswrapper[5007]: I0218 19:56:08.917099 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54a67f1-3156-43d6-b428-62e305e3bc7b-kube-api-access-m2tmh" (OuterVolumeSpecName: "kube-api-access-m2tmh") pod "e54a67f1-3156-43d6-b428-62e305e3bc7b" (UID: "e54a67f1-3156-43d6-b428-62e305e3bc7b"). InnerVolumeSpecName "kube-api-access-m2tmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.008448 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-config\") pod \"8659fece-e93f-4032-9050-659bc594f86b\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.008511 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm4zx\" (UniqueName: \"kubernetes.io/projected/8659fece-e93f-4032-9050-659bc594f86b-kube-api-access-jm4zx\") pod \"8659fece-e93f-4032-9050-659bc594f86b\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.008590 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-ovsdbserver-nb\") pod \"8659fece-e93f-4032-9050-659bc594f86b\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.008694 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-ovsdbserver-sb\") pod \"8659fece-e93f-4032-9050-659bc594f86b\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.008763 5007 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-dns-svc\") pod \"8659fece-e93f-4032-9050-659bc594f86b\" (UID: \"8659fece-e93f-4032-9050-659bc594f86b\") " Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.009251 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2tmh\" (UniqueName: \"kubernetes.io/projected/e54a67f1-3156-43d6-b428-62e305e3bc7b-kube-api-access-m2tmh\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.009270 5007 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.016790 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8659fece-e93f-4032-9050-659bc594f86b-kube-api-access-jm4zx" (OuterVolumeSpecName: "kube-api-access-jm4zx") pod "8659fece-e93f-4032-9050-659bc594f86b" (UID: "8659fece-e93f-4032-9050-659bc594f86b"). InnerVolumeSpecName "kube-api-access-jm4zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.031766 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e54a67f1-3156-43d6-b428-62e305e3bc7b" (UID: "e54a67f1-3156-43d6-b428-62e305e3bc7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.054545 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-config-data" (OuterVolumeSpecName: "config-data") pod "e54a67f1-3156-43d6-b428-62e305e3bc7b" (UID: "e54a67f1-3156-43d6-b428-62e305e3bc7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.068093 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-config" (OuterVolumeSpecName: "config") pod "8659fece-e93f-4032-9050-659bc594f86b" (UID: "8659fece-e93f-4032-9050-659bc594f86b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.069209 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8659fece-e93f-4032-9050-659bc594f86b" (UID: "8659fece-e93f-4032-9050-659bc594f86b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.082095 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8659fece-e93f-4032-9050-659bc594f86b" (UID: "8659fece-e93f-4032-9050-659bc594f86b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.091122 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8659fece-e93f-4032-9050-659bc594f86b" (UID: "8659fece-e93f-4032-9050-659bc594f86b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.111569 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.111609 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.111621 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.111632 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.111646 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54a67f1-3156-43d6-b428-62e305e3bc7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.111658 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8659fece-e93f-4032-9050-659bc594f86b-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.111669 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm4zx\" (UniqueName: \"kubernetes.io/projected/8659fece-e93f-4032-9050-659bc594f86b-kube-api-access-jm4zx\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.727732 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xccwz" event={"ID":"c8e4fea3-ff28-4966-964d-31c16c2a5c13","Type":"ContainerStarted","Data":"fd1cbff3f36c02a66920b807f52ec8f8b53a6cb3059a14698ff226b8e63f673b"} Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.730121 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-kwq4c" event={"ID":"18a90b81-177c-40e6-8b75-be61fe81187d","Type":"ContainerStarted","Data":"0d8ab64e64a4c01ac65126e1b229ef053c97b9cc5a443e89bbc4a2abbc22b424"} Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.732115 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" event={"ID":"8659fece-e93f-4032-9050-659bc594f86b","Type":"ContainerDied","Data":"f16d49881219ac9b71743cdbf235634c401322ce95c2b5a7ff4ccf13b9066b6f"} Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.732159 5007 scope.go:117] "RemoveContainer" containerID="6a2cdb9c89efabe8b4e4dc262bb9b9a7aa539bfa446ec435f788a22351f1a1d1" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.732176 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c566df67-pzcjs" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.734667 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9m4t7" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.735864 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad6bbbdf-51d7-4968-9e80-eed841e76e5c","Type":"ContainerStarted","Data":"8cd3df79fcd3bf1da6f2924afe014d1f12b702b4586117d889dc7aab808e10b0"} Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.759646 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-xccwz" podStartSLOduration=3.01622963 podStartE2EDuration="13.759628696s" podCreationTimestamp="2026-02-18 19:55:56 +0000 UTC" firstStartedPulling="2026-02-18 19:55:57.930700727 +0000 UTC m=+1042.712820251" lastFinishedPulling="2026-02-18 19:56:08.674099783 +0000 UTC m=+1053.456219317" observedRunningTime="2026-02-18 19:56:09.753703737 +0000 UTC m=+1054.535823271" watchObservedRunningTime="2026-02-18 19:56:09.759628696 +0000 UTC m=+1054.541748220" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.764392 5007 scope.go:117] "RemoveContainer" containerID="a506f4f3a618ed9b05a0b0e06053ff58f5a916ba4d4f824c54678ec2561cb270" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.775496 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-kwq4c" podStartSLOduration=2.356056446 podStartE2EDuration="12.775473177s" podCreationTimestamp="2026-02-18 19:55:57 +0000 UTC" firstStartedPulling="2026-02-18 19:55:58.277235005 +0000 UTC m=+1043.059354529" lastFinishedPulling="2026-02-18 19:56:08.696651726 +0000 UTC m=+1053.478771260" observedRunningTime="2026-02-18 19:56:09.771823323 +0000 UTC m=+1054.553942847" watchObservedRunningTime="2026-02-18 19:56:09.775473177 +0000 UTC m=+1054.557592711" Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.807334 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c566df67-pzcjs"] Feb 18 19:56:09 crc kubenswrapper[5007]: 
I0218 19:56:09.818190 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c566df67-pzcjs"] Feb 18 19:56:09 crc kubenswrapper[5007]: I0218 19:56:09.920332 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8659fece-e93f-4032-9050-659bc594f86b" path="/var/lib/kubelet/pods/8659fece-e93f-4032-9050-659bc594f86b/volumes" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.264765 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7984c7b7b9-n886b"] Feb 18 19:56:10 crc kubenswrapper[5007]: E0218 19:56:10.265160 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54a67f1-3156-43d6-b428-62e305e3bc7b" containerName="glance-db-sync" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265179 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54a67f1-3156-43d6-b428-62e305e3bc7b" containerName="glance-db-sync" Feb 18 19:56:10 crc kubenswrapper[5007]: E0218 19:56:10.265191 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8659fece-e93f-4032-9050-659bc594f86b" containerName="init" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265199 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="8659fece-e93f-4032-9050-659bc594f86b" containerName="init" Feb 18 19:56:10 crc kubenswrapper[5007]: E0218 19:56:10.265212 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8659fece-e93f-4032-9050-659bc594f86b" containerName="dnsmasq-dns" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265412 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="8659fece-e93f-4032-9050-659bc594f86b" containerName="dnsmasq-dns" Feb 18 19:56:10 crc kubenswrapper[5007]: E0218 19:56:10.265453 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65bb66ce-376a-4f9a-9202-1d122f4fd3a5" containerName="mariadb-database-create" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265461 5007 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="65bb66ce-376a-4f9a-9202-1d122f4fd3a5" containerName="mariadb-database-create" Feb 18 19:56:10 crc kubenswrapper[5007]: E0218 19:56:10.265478 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822282ba-58c6-4182-88e3-1f097b9d912b" containerName="mariadb-account-create-update" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265484 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="822282ba-58c6-4182-88e3-1f097b9d912b" containerName="mariadb-account-create-update" Feb 18 19:56:10 crc kubenswrapper[5007]: E0218 19:56:10.265494 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f87bd0d-a9a8-4009-a7f3-54429ed28348" containerName="mariadb-account-create-update" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265501 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f87bd0d-a9a8-4009-a7f3-54429ed28348" containerName="mariadb-account-create-update" Feb 18 19:56:10 crc kubenswrapper[5007]: E0218 19:56:10.265515 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eade00fd-e7cb-4213-bc67-a0c2cec0cc13" containerName="mariadb-account-create-update" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265524 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="eade00fd-e7cb-4213-bc67-a0c2cec0cc13" containerName="mariadb-account-create-update" Feb 18 19:56:10 crc kubenswrapper[5007]: E0218 19:56:10.265554 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="923f04c2-f4bb-4ba8-8b06-50e4cda22028" containerName="mariadb-database-create" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265563 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="923f04c2-f4bb-4ba8-8b06-50e4cda22028" containerName="mariadb-database-create" Feb 18 19:56:10 crc kubenswrapper[5007]: E0218 19:56:10.265572 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0510ca-f7a8-437b-9625-f6be614e61cb" 
containerName="mariadb-database-create" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265579 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0510ca-f7a8-437b-9625-f6be614e61cb" containerName="mariadb-database-create" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265770 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="8659fece-e93f-4032-9050-659bc594f86b" containerName="dnsmasq-dns" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265804 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54a67f1-3156-43d6-b428-62e305e3bc7b" containerName="glance-db-sync" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265822 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f87bd0d-a9a8-4009-a7f3-54429ed28348" containerName="mariadb-account-create-update" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265847 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="eade00fd-e7cb-4213-bc67-a0c2cec0cc13" containerName="mariadb-account-create-update" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265856 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="822282ba-58c6-4182-88e3-1f097b9d912b" containerName="mariadb-account-create-update" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265872 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e0510ca-f7a8-437b-9625-f6be614e61cb" containerName="mariadb-database-create" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265894 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="923f04c2-f4bb-4ba8-8b06-50e4cda22028" containerName="mariadb-database-create" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.265909 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="65bb66ce-376a-4f9a-9202-1d122f4fd3a5" containerName="mariadb-database-create" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.267030 5007 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.284235 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7984c7b7b9-n886b"] Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.342839 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-ovsdbserver-nb\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.342906 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-config\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.342943 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-dns-svc\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.342985 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-dns-swift-storage-0\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.343020 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-ovsdbserver-sb\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.343123 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndv6g\" (UniqueName: \"kubernetes.io/projected/6146f8f3-f71e-4b93-9118-f003eefce996-kube-api-access-ndv6g\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.444591 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndv6g\" (UniqueName: \"kubernetes.io/projected/6146f8f3-f71e-4b93-9118-f003eefce996-kube-api-access-ndv6g\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.445330 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-ovsdbserver-nb\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.445501 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-config\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.446160 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-dns-svc\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.446221 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-ovsdbserver-nb\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.446354 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-dns-swift-storage-0\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.446516 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-ovsdbserver-sb\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.446550 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-config\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.446605 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-dns-svc\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.447241 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-ovsdbserver-sb\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.447499 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-dns-swift-storage-0\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.539884 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndv6g\" (UniqueName: \"kubernetes.io/projected/6146f8f3-f71e-4b93-9118-f003eefce996-kube-api-access-ndv6g\") pod \"dnsmasq-dns-7984c7b7b9-n886b\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:10 crc kubenswrapper[5007]: I0218 19:56:10.587937 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:11 crc kubenswrapper[5007]: I0218 19:56:11.216361 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7984c7b7b9-n886b"] Feb 18 19:56:11 crc kubenswrapper[5007]: W0218 19:56:11.221858 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6146f8f3_f71e_4b93_9118_f003eefce996.slice/crio-c70401ac822b6e1312c97284f9b176f3240ae601e478e30f65a88e3927d46b57 WatchSource:0}: Error finding container c70401ac822b6e1312c97284f9b176f3240ae601e478e30f65a88e3927d46b57: Status 404 returned error can't find the container with id c70401ac822b6e1312c97284f9b176f3240ae601e478e30f65a88e3927d46b57 Feb 18 19:56:11 crc kubenswrapper[5007]: I0218 19:56:11.752714 5007 generic.go:334] "Generic (PLEG): container finished" podID="6146f8f3-f71e-4b93-9118-f003eefce996" containerID="a6bfedb5a3fd06ae7671b186f0e3c85ed5f5951e951cb7ddaae58c8d1e094ba5" exitCode=0 Feb 18 19:56:11 crc kubenswrapper[5007]: I0218 19:56:11.752913 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" event={"ID":"6146f8f3-f71e-4b93-9118-f003eefce996","Type":"ContainerDied","Data":"a6bfedb5a3fd06ae7671b186f0e3c85ed5f5951e951cb7ddaae58c8d1e094ba5"} Feb 18 19:56:11 crc kubenswrapper[5007]: I0218 19:56:11.753084 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" event={"ID":"6146f8f3-f71e-4b93-9118-f003eefce996","Type":"ContainerStarted","Data":"c70401ac822b6e1312c97284f9b176f3240ae601e478e30f65a88e3927d46b57"} Feb 18 19:56:12 crc kubenswrapper[5007]: I0218 19:56:12.776907 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" event={"ID":"6146f8f3-f71e-4b93-9118-f003eefce996","Type":"ContainerStarted","Data":"d6689d4c7125ce3ff7d7e2d39743f1c52f63ce790d416c33c3777e77a471a6fd"} Feb 18 19:56:12 crc 
kubenswrapper[5007]: I0218 19:56:12.776978 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:12 crc kubenswrapper[5007]: I0218 19:56:12.784883 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad6bbbdf-51d7-4968-9e80-eed841e76e5c","Type":"ContainerStarted","Data":"fec0b89c24b9a631e73a6a5012ca4d7b7bb83658c59b1c99f48123b4c08244e3"} Feb 18 19:56:12 crc kubenswrapper[5007]: I0218 19:56:12.784933 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad6bbbdf-51d7-4968-9e80-eed841e76e5c","Type":"ContainerStarted","Data":"6899d09b427bdcc8aa8c975b2b37d0e9e34eac54fa330a7359811c8678982450"} Feb 18 19:56:12 crc kubenswrapper[5007]: I0218 19:56:12.814553 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" podStartSLOduration=2.814520229 podStartE2EDuration="2.814520229s" podCreationTimestamp="2026-02-18 19:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:12.804955146 +0000 UTC m=+1057.587074710" watchObservedRunningTime="2026-02-18 19:56:12.814520229 +0000 UTC m=+1057.596639763" Feb 18 19:56:12 crc kubenswrapper[5007]: I0218 19:56:12.844480 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.84442691 podStartE2EDuration="23.84442691s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:12.836773012 +0000 UTC m=+1057.618892566" watchObservedRunningTime="2026-02-18 19:56:12.84442691 +0000 UTC m=+1057.626546474" Feb 18 19:56:13 crc kubenswrapper[5007]: I0218 19:56:13.793623 5007 
generic.go:334] "Generic (PLEG): container finished" podID="18a90b81-177c-40e6-8b75-be61fe81187d" containerID="0d8ab64e64a4c01ac65126e1b229ef053c97b9cc5a443e89bbc4a2abbc22b424" exitCode=0 Feb 18 19:56:13 crc kubenswrapper[5007]: I0218 19:56:13.793708 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-kwq4c" event={"ID":"18a90b81-177c-40e6-8b75-be61fe81187d","Type":"ContainerDied","Data":"0d8ab64e64a4c01ac65126e1b229ef053c97b9cc5a443e89bbc4a2abbc22b424"} Feb 18 19:56:14 crc kubenswrapper[5007]: I0218 19:56:14.759235 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 19:56:14 crc kubenswrapper[5007]: I0218 19:56:14.809397 5007 generic.go:334] "Generic (PLEG): container finished" podID="c8e4fea3-ff28-4966-964d-31c16c2a5c13" containerID="fd1cbff3f36c02a66920b807f52ec8f8b53a6cb3059a14698ff226b8e63f673b" exitCode=0 Feb 18 19:56:14 crc kubenswrapper[5007]: I0218 19:56:14.809524 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xccwz" event={"ID":"c8e4fea3-ff28-4966-964d-31c16c2a5c13","Type":"ContainerDied","Data":"fd1cbff3f36c02a66920b807f52ec8f8b53a6cb3059a14698ff226b8e63f673b"} Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.151180 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.233887 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-db-sync-config-data\") pod \"18a90b81-177c-40e6-8b75-be61fe81187d\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.234089 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-config-data\") pod \"18a90b81-177c-40e6-8b75-be61fe81187d\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.234233 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6498\" (UniqueName: \"kubernetes.io/projected/18a90b81-177c-40e6-8b75-be61fe81187d-kube-api-access-n6498\") pod \"18a90b81-177c-40e6-8b75-be61fe81187d\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.234273 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-combined-ca-bundle\") pod \"18a90b81-177c-40e6-8b75-be61fe81187d\" (UID: \"18a90b81-177c-40e6-8b75-be61fe81187d\") " Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.240231 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "18a90b81-177c-40e6-8b75-be61fe81187d" (UID: "18a90b81-177c-40e6-8b75-be61fe81187d"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.240263 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a90b81-177c-40e6-8b75-be61fe81187d-kube-api-access-n6498" (OuterVolumeSpecName: "kube-api-access-n6498") pod "18a90b81-177c-40e6-8b75-be61fe81187d" (UID: "18a90b81-177c-40e6-8b75-be61fe81187d"). InnerVolumeSpecName "kube-api-access-n6498". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.264651 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18a90b81-177c-40e6-8b75-be61fe81187d" (UID: "18a90b81-177c-40e6-8b75-be61fe81187d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.287606 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-config-data" (OuterVolumeSpecName: "config-data") pod "18a90b81-177c-40e6-8b75-be61fe81187d" (UID: "18a90b81-177c-40e6-8b75-be61fe81187d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.336188 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.336231 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6498\" (UniqueName: \"kubernetes.io/projected/18a90b81-177c-40e6-8b75-be61fe81187d-kube-api-access-n6498\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.336250 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.336267 5007 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18a90b81-177c-40e6-8b75-be61fe81187d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.822773 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-kwq4c" Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.822923 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-kwq4c" event={"ID":"18a90b81-177c-40e6-8b75-be61fe81187d","Type":"ContainerDied","Data":"0cdc0b1a4372221f695a2a3131cdbffad4ea0b8e6908d4ef0a3d7c9fd60119ef"} Feb 18 19:56:15 crc kubenswrapper[5007]: I0218 19:56:15.824626 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cdc0b1a4372221f695a2a3131cdbffad4ea0b8e6908d4ef0a3d7c9fd60119ef" Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.209503 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xccwz" Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.255847 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e4fea3-ff28-4966-964d-31c16c2a5c13-config-data\") pod \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\" (UID: \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\") " Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.255958 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kl9b\" (UniqueName: \"kubernetes.io/projected/c8e4fea3-ff28-4966-964d-31c16c2a5c13-kube-api-access-2kl9b\") pod \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\" (UID: \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\") " Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.255999 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e4fea3-ff28-4966-964d-31c16c2a5c13-combined-ca-bundle\") pod \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\" (UID: \"c8e4fea3-ff28-4966-964d-31c16c2a5c13\") " Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.259167 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e4fea3-ff28-4966-964d-31c16c2a5c13-kube-api-access-2kl9b" (OuterVolumeSpecName: "kube-api-access-2kl9b") pod "c8e4fea3-ff28-4966-964d-31c16c2a5c13" (UID: "c8e4fea3-ff28-4966-964d-31c16c2a5c13"). InnerVolumeSpecName "kube-api-access-2kl9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.278033 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8e4fea3-ff28-4966-964d-31c16c2a5c13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8e4fea3-ff28-4966-964d-31c16c2a5c13" (UID: "c8e4fea3-ff28-4966-964d-31c16c2a5c13"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.291861 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8e4fea3-ff28-4966-964d-31c16c2a5c13-config-data" (OuterVolumeSpecName: "config-data") pod "c8e4fea3-ff28-4966-964d-31c16c2a5c13" (UID: "c8e4fea3-ff28-4966-964d-31c16c2a5c13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.358585 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8e4fea3-ff28-4966-964d-31c16c2a5c13-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.358629 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kl9b\" (UniqueName: \"kubernetes.io/projected/c8e4fea3-ff28-4966-964d-31c16c2a5c13-kube-api-access-2kl9b\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.358644 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8e4fea3-ff28-4966-964d-31c16c2a5c13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.832356 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xccwz" event={"ID":"c8e4fea3-ff28-4966-964d-31c16c2a5c13","Type":"ContainerDied","Data":"fbf474ba2cedc44393aa26de147eaf4e3512ff8dcc928f656017658f85bd29a5"} Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.832411 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf474ba2cedc44393aa26de147eaf4e3512ff8dcc928f656017658f85bd29a5" Feb 18 19:56:16 crc kubenswrapper[5007]: I0218 19:56:16.832412 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xccwz" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.040617 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7984c7b7b9-n886b"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.041223 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" podUID="6146f8f3-f71e-4b93-9118-f003eefce996" containerName="dnsmasq-dns" containerID="cri-o://d6689d4c7125ce3ff7d7e2d39743f1c52f63ce790d416c33c3777e77a471a6fd" gracePeriod=10 Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.045118 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.095389 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-v8nvk"] Feb 18 19:56:17 crc kubenswrapper[5007]: E0218 19:56:17.095724 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e4fea3-ff28-4966-964d-31c16c2a5c13" containerName="keystone-db-sync" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.095742 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e4fea3-ff28-4966-964d-31c16c2a5c13" containerName="keystone-db-sync" Feb 18 19:56:17 crc kubenswrapper[5007]: E0218 19:56:17.095768 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a90b81-177c-40e6-8b75-be61fe81187d" containerName="watcher-db-sync" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.095775 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a90b81-177c-40e6-8b75-be61fe81187d" containerName="watcher-db-sync" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.095933 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a90b81-177c-40e6-8b75-be61fe81187d" containerName="watcher-db-sync" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.095956 5007 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e4fea3-ff28-4966-964d-31c16c2a5c13" containerName="keystone-db-sync" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.096527 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.139754 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.139957 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.140110 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.140291 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.140402 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sd9ml" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.154267 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-689568cbb5-7vrl6"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.156113 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.219753 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689568cbb5-7vrl6"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.283589 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-dns-swift-storage-0\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.283641 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-fernet-keys\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.283667 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-ovsdbserver-nb\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.283695 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-dns-svc\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.283709 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-config\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.283721 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-ovsdbserver-sb\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.283744 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-scripts\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.283771 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-combined-ca-bundle\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.283848 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-config-data\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.284033 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-k4lxw\" (UniqueName: \"kubernetes.io/projected/5ea25634-1f3e-47e6-9672-3273d7358d44-kube-api-access-k4lxw\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.284068 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-credential-keys\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.284145 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcjhn\" (UniqueName: \"kubernetes.io/projected/5bc12d79-93c4-44fe-becf-e048bb8554c9-kube-api-access-zcjhn\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.385251 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-scripts\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.385300 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-combined-ca-bundle\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.385365 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-config-data\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.385398 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4lxw\" (UniqueName: \"kubernetes.io/projected/5ea25634-1f3e-47e6-9672-3273d7358d44-kube-api-access-k4lxw\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.385421 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-credential-keys\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.385487 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcjhn\" (UniqueName: \"kubernetes.io/projected/5bc12d79-93c4-44fe-becf-e048bb8554c9-kube-api-access-zcjhn\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.385513 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-dns-swift-storage-0\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.385538 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-fernet-keys\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.385558 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-ovsdbserver-nb\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.385580 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-dns-svc\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.385596 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-config\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.385611 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-ovsdbserver-sb\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.386492 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-ovsdbserver-sb\") pod 
\"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.386716 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-ovsdbserver-nb\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.386864 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-dns-svc\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.387021 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-config\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.387460 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-dns-swift-storage-0\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.395273 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-fernet-keys\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 
18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.402659 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-config-data\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.405981 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-credential-keys\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.411258 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-scripts\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.444046 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.445070 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.452446 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-combined-ca-bundle\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.467055 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4lxw\" (UniqueName: \"kubernetes.io/projected/5ea25634-1f3e-47e6-9672-3273d7358d44-kube-api-access-k4lxw\") pod \"keystone-bootstrap-v8nvk\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.492712 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.492994 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-vzw52" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.493114 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6551271-9859-4f43-8a89-5c5da823ef56-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.493182 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6551271-9859-4f43-8a89-5c5da823ef56-logs\") pod \"watcher-applier-0\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.493248 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2fms\" (UniqueName: \"kubernetes.io/projected/a6551271-9859-4f43-8a89-5c5da823ef56-kube-api-access-f2fms\") pod \"watcher-applier-0\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.493415 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6551271-9859-4f43-8a89-5c5da823ef56-config-data\") pod \"watcher-applier-0\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.493858 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcjhn\" (UniqueName: \"kubernetes.io/projected/5bc12d79-93c4-44fe-becf-e048bb8554c9-kube-api-access-zcjhn\") pod \"dnsmasq-dns-689568cbb5-7vrl6\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.519507 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.520977 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.526903 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.546858 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.548476 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.553531 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.562693 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.586875 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.596578 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6551271-9859-4f43-8a89-5c5da823ef56-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.596639 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6551271-9859-4f43-8a89-5c5da823ef56-logs\") pod \"watcher-applier-0\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.596705 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2fms\" (UniqueName: \"kubernetes.io/projected/a6551271-9859-4f43-8a89-5c5da823ef56-kube-api-access-f2fms\") pod \"watcher-applier-0\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.596738 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6551271-9859-4f43-8a89-5c5da823ef56-config-data\") pod \"watcher-applier-0\" (UID: 
\"a6551271-9859-4f43-8a89-5c5da823ef56\") " pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.604637 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6551271-9859-4f43-8a89-5c5da823ef56-config-data\") pod \"watcher-applier-0\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.604872 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6551271-9859-4f43-8a89-5c5da823ef56-logs\") pod \"watcher-applier-0\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.609148 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6551271-9859-4f43-8a89-5c5da823ef56-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.633082 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.670180 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-44gls"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.671315 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.700474 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-j4zws" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.700728 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.703246 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.703562 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-config-data\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.703840 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.703909 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-config-data\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.704065 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psr6p\" (UniqueName: 
\"kubernetes.io/projected/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-kube-api-access-psr6p\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.704240 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.704392 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.704548 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45tvw\" (UniqueName: \"kubernetes.io/projected/bb9df403-a190-484e-8b58-2e33e44ddd85-kube-api-access-45tvw\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.704702 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.704854 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-logs\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.704943 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb9df403-a190-484e-8b58-2e33e44ddd85-logs\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.712555 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2fms\" (UniqueName: \"kubernetes.io/projected/a6551271-9859-4f43-8a89-5c5da823ef56-kube-api-access-f2fms\") pod \"watcher-applier-0\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.715822 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.725181 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.783473 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68dccdd97-4vtx9"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.790988 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.804211 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.806976 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.807026 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97f6\" (UniqueName: \"kubernetes.io/projected/8b4964c0-2913-437d-b391-ed607a8fa813-kube-api-access-s97f6\") pod \"neutron-db-sync-44gls\" (UID: \"8b4964c0-2913-437d-b391-ed607a8fa813\") " pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.807058 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.807088 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b4964c0-2913-437d-b391-ed607a8fa813-combined-ca-bundle\") pod \"neutron-db-sync-44gls\" (UID: \"8b4964c0-2913-437d-b391-ed607a8fa813\") " pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.807119 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45tvw\" (UniqueName: 
\"kubernetes.io/projected/bb9df403-a190-484e-8b58-2e33e44ddd85-kube-api-access-45tvw\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.807166 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.807267 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-logs\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.807286 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb9df403-a190-484e-8b58-2e33e44ddd85-logs\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.807730 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b4964c0-2913-437d-b391-ed607a8fa813-config\") pod \"neutron-db-sync-44gls\" (UID: \"8b4964c0-2913-437d-b391-ed607a8fa813\") " pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.807781 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-config-data\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 
19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.807806 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.807823 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-config-data\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.807854 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psr6p\" (UniqueName: \"kubernetes.io/projected/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-kube-api-access-psr6p\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.813316 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-logs\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.814631 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.817984 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.840227 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb9df403-a190-484e-8b58-2e33e44ddd85-logs\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.840639 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.840880 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.841002 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-fp6fx" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.844566 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-config-data\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.845445 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.850043 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.856289 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45tvw\" (UniqueName: \"kubernetes.io/projected/bb9df403-a190-484e-8b58-2e33e44ddd85-kube-api-access-45tvw\") pod \"watcher-api-0\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.858950 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-config-data\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.874314 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-44gls"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.874670 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.882554 5007 generic.go:334] "Generic (PLEG): container finished" podID="6146f8f3-f71e-4b93-9118-f003eefce996" containerID="d6689d4c7125ce3ff7d7e2d39743f1c52f63ce790d416c33c3777e77a471a6fd" exitCode=0 Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.882598 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68dccdd97-4vtx9"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.882620 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" event={"ID":"6146f8f3-f71e-4b93-9118-f003eefce996","Type":"ContainerDied","Data":"d6689d4c7125ce3ff7d7e2d39743f1c52f63ce790d416c33c3777e77a471a6fd"} Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.888090 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.894508 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v8nvk"] Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.911333 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1739350e-8a84-4c46-9bd2-7d98b9007dae-logs\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.911682 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s97f6\" (UniqueName: \"kubernetes.io/projected/8b4964c0-2913-437d-b391-ed607a8fa813-kube-api-access-s97f6\") pod \"neutron-db-sync-44gls\" (UID: \"8b4964c0-2913-437d-b391-ed607a8fa813\") " pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.911709 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1739350e-8a84-4c46-9bd2-7d98b9007dae-horizon-secret-key\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.911741 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b4964c0-2913-437d-b391-ed607a8fa813-combined-ca-bundle\") pod \"neutron-db-sync-44gls\" (UID: \"8b4964c0-2913-437d-b391-ed607a8fa813\") " pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.911771 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1739350e-8a84-4c46-9bd2-7d98b9007dae-scripts\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.911827 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b4964c0-2913-437d-b391-ed607a8fa813-config\") pod \"neutron-db-sync-44gls\" (UID: \"8b4964c0-2913-437d-b391-ed607a8fa813\") " pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.911849 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1739350e-8a84-4c46-9bd2-7d98b9007dae-config-data\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.911880 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gzbfp\" (UniqueName: \"kubernetes.io/projected/1739350e-8a84-4c46-9bd2-7d98b9007dae-kube-api-access-gzbfp\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.923656 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psr6p\" (UniqueName: \"kubernetes.io/projected/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-kube-api-access-psr6p\") pod \"watcher-decision-engine-0\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.926821 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b4964c0-2913-437d-b391-ed607a8fa813-config\") pod \"neutron-db-sync-44gls\" (UID: \"8b4964c0-2913-437d-b391-ed607a8fa813\") " pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.927189 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.944888 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b4964c0-2913-437d-b391-ed607a8fa813-combined-ca-bundle\") pod \"neutron-db-sync-44gls\" (UID: \"8b4964c0-2913-437d-b391-ed607a8fa813\") " pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:17 crc kubenswrapper[5007]: I0218 19:56:17.965747 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s97f6\" (UniqueName: \"kubernetes.io/projected/8b4964c0-2913-437d-b391-ed607a8fa813-kube-api-access-s97f6\") pod \"neutron-db-sync-44gls\" (UID: \"8b4964c0-2913-437d-b391-ed607a8fa813\") " pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:17.986164 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-w57rb"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:17.987336 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-w57rb"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:17.987416 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:17.996722 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tck47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:17.997009 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:17.997120 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.013800 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1739350e-8a84-4c46-9bd2-7d98b9007dae-logs\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.013875 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1739350e-8a84-4c46-9bd2-7d98b9007dae-horizon-secret-key\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.013952 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1739350e-8a84-4c46-9bd2-7d98b9007dae-scripts\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.014025 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1739350e-8a84-4c46-9bd2-7d98b9007dae-config-data\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " 
pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.014056 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzbfp\" (UniqueName: \"kubernetes.io/projected/1739350e-8a84-4c46-9bd2-7d98b9007dae-kube-api-access-gzbfp\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.015453 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1739350e-8a84-4c46-9bd2-7d98b9007dae-logs\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.022932 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1739350e-8a84-4c46-9bd2-7d98b9007dae-horizon-secret-key\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.023896 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1739350e-8a84-4c46-9bd2-7d98b9007dae-scripts\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.025698 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1739350e-8a84-4c46-9bd2-7d98b9007dae-config-data\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.049984 5007 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gzbfp\" (UniqueName: \"kubernetes.io/projected/1739350e-8a84-4c46-9bd2-7d98b9007dae-kube-api-access-gzbfp\") pod \"horizon-68dccdd97-4vtx9\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.055651 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-spkvd"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.057172 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.064909 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.065947 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.072107 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689568cbb5-7vrl6"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.077092 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.078779 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-x89jd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.086639 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-spkvd"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.105191 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74c64d6b7c-xfb47"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.106710 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.117981 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.119855 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-db-sync-config-data\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.119957 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-config-data\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.120013 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-combined-ca-bundle\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.120035 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-combined-ca-bundle\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.120081 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/757421ca-4bfc-42d6-ae73-bb479f3b5975-logs\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.120114 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-scripts\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.120133 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-config-data\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.120155 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grhn7\" (UniqueName: \"kubernetes.io/projected/c60b1e07-e980-450b-bfb8-c0210edeae7d-kube-api-access-grhn7\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.120173 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c60b1e07-e980-450b-bfb8-c0210edeae7d-etc-machine-id\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.120236 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcr8k\" (UniqueName: 
\"kubernetes.io/projected/757421ca-4bfc-42d6-ae73-bb479f3b5975-kube-api-access-xcr8k\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.120255 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-scripts\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.120334 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.123659 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.123807 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.156620 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.213408 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74c64d6b7c-xfb47"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.222659 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-dns-svc\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.222704 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.222723 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-config\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.222749 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcr8k\" (UniqueName: \"kubernetes.io/projected/757421ca-4bfc-42d6-ae73-bb479f3b5975-kube-api-access-xcr8k\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.222805 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-scripts\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.222835 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-ovsdbserver-nb\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.222859 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hngxj\" (UniqueName: \"kubernetes.io/projected/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-kube-api-access-hngxj\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.222879 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af8794a0-c909-4418-ab5a-a4d89492f37f-run-httpd\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.222902 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af8794a0-c909-4418-ab5a-a4d89492f37f-log-httpd\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.222926 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-ovsdbserver-sb\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.222951 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-db-sync-config-data\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.222968 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223006 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-config-data\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223032 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-scripts\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223049 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-config-data\") pod \"cinder-db-sync-w57rb\" (UID: 
\"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223082 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-combined-ca-bundle\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223099 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrzph\" (UniqueName: \"kubernetes.io/projected/af8794a0-c909-4418-ab5a-a4d89492f37f-kube-api-access-mrzph\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223117 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-combined-ca-bundle\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223132 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/757421ca-4bfc-42d6-ae73-bb479f3b5975-logs\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223155 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-dns-swift-storage-0\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " 
pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223180 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-scripts\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223203 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-config-data\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223227 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grhn7\" (UniqueName: \"kubernetes.io/projected/c60b1e07-e980-450b-bfb8-c0210edeae7d-kube-api-access-grhn7\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223251 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c60b1e07-e980-450b-bfb8-c0210edeae7d-etc-machine-id\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.223347 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c60b1e07-e980-450b-bfb8-c0210edeae7d-etc-machine-id\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.240018 5007 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-db-sync-config-data\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.242268 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/757421ca-4bfc-42d6-ae73-bb479f3b5975-logs\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.242841 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-scripts\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.246467 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.248463 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-config-data\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.248928 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-combined-ca-bundle\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.249447 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-config-data\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.250023 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcr8k\" (UniqueName: \"kubernetes.io/projected/757421ca-4bfc-42d6-ae73-bb479f3b5975-kube-api-access-xcr8k\") pod \"placement-db-sync-spkvd\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.250488 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-scripts\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.261851 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-combined-ca-bundle\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.296655 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59b5956c5c-bhzjl"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.298060 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.315458 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59b5956c5c-bhzjl"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.317187 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhn7\" (UniqueName: \"kubernetes.io/projected/c60b1e07-e980-450b-bfb8-c0210edeae7d-kube-api-access-grhn7\") pod \"cinder-db-sync-w57rb\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.324866 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-dns-svc\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.324911 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e42c854b-1439-4a01-9a89-c034c0272d32-horizon-secret-key\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.324936 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.324973 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-config\") pod 
\"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.324998 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e42c854b-1439-4a01-9a89-c034c0272d32-config-data\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325018 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-ovsdbserver-nb\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325038 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hngxj\" (UniqueName: \"kubernetes.io/projected/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-kube-api-access-hngxj\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325070 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af8794a0-c909-4418-ab5a-a4d89492f37f-run-httpd\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325088 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e42c854b-1439-4a01-9a89-c034c0272d32-scripts\") pod \"horizon-59b5956c5c-bhzjl\" (UID: 
\"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325105 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af8794a0-c909-4418-ab5a-a4d89492f37f-log-httpd\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325129 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-ovsdbserver-sb\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325146 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325180 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-config-data\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325206 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-scripts\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325232 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42c854b-1439-4a01-9a89-c034c0272d32-logs\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325258 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49m6n\" (UniqueName: \"kubernetes.io/projected/e42c854b-1439-4a01-9a89-c034c0272d32-kube-api-access-49m6n\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325280 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrzph\" (UniqueName: \"kubernetes.io/projected/af8794a0-c909-4418-ab5a-a4d89492f37f-kube-api-access-mrzph\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.325306 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-dns-swift-storage-0\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.331037 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-dns-swift-storage-0\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.332814 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-dns-svc\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.332849 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-ovsdbserver-sb\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.333204 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af8794a0-c909-4418-ab5a-a4d89492f37f-log-httpd\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.336412 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-ovsdbserver-nb\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.342036 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-config\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.342540 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af8794a0-c909-4418-ab5a-a4d89492f37f-run-httpd\") pod \"ceilometer-0\" (UID: 
\"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.351240 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.354314 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.358603 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.359047 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.359192 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-p65fx" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.359307 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.371988 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.383843 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-scripts\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.398891 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-config-data\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.412797 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-spkvd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.413405 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w57rb" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.416288 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.429170 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42c854b-1439-4a01-9a89-c034c0272d32-logs\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.429230 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49m6n\" (UniqueName: \"kubernetes.io/projected/e42c854b-1439-4a01-9a89-c034c0272d32-kube-api-access-49m6n\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.429234 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrzph\" (UniqueName: \"kubernetes.io/projected/af8794a0-c909-4418-ab5a-a4d89492f37f-kube-api-access-mrzph\") pod \"ceilometer-0\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " 
pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.432409 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42c854b-1439-4a01-9a89-c034c0272d32-logs\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.432840 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e42c854b-1439-4a01-9a89-c034c0272d32-horizon-secret-key\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.432914 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e42c854b-1439-4a01-9a89-c034c0272d32-config-data\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.432986 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e42c854b-1439-4a01-9a89-c034c0272d32-scripts\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.433639 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e42c854b-1439-4a01-9a89-c034c0272d32-scripts\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.443675 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e42c854b-1439-4a01-9a89-c034c0272d32-config-data\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.449478 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-qz65k"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.450944 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qz65k" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.451933 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e42c854b-1439-4a01-9a89-c034c0272d32-horizon-secret-key\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.456109 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tgb24" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.456315 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.458565 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qz65k"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.482948 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hngxj\" (UniqueName: \"kubernetes.io/projected/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-kube-api-access-hngxj\") pod \"dnsmasq-dns-74c64d6b7c-xfb47\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.488017 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.497031 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49m6n\" (UniqueName: \"kubernetes.io/projected/e42c854b-1439-4a01-9a89-c034c0272d32-kube-api-access-49m6n\") pod \"horizon-59b5956c5c-bhzjl\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.534053 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-config-data\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.534088 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87bb0786-49a4-42a8-8010-bd3dfd2342e1-logs\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.534134 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dh4d\" (UniqueName: \"kubernetes.io/projected/87bb0786-49a4-42a8-8010-bd3dfd2342e1-kube-api-access-9dh4d\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.534154 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.534171 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.534191 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/87bb0786-49a4-42a8-8010-bd3dfd2342e1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.534214 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.534253 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-scripts\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.540103 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.556075 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689568cbb5-7vrl6"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.587971 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.589456 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.645829 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-scripts\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.646278 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f906c05-1123-4e20-9792-a4930e299212-combined-ca-bundle\") pod \"barbican-db-sync-qz65k\" (UID: \"2f906c05-1123-4e20-9792-a4930e299212\") " pod="openstack/barbican-db-sync-qz65k" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.646409 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-config-data\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.646490 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87bb0786-49a4-42a8-8010-bd3dfd2342e1-logs\") pod 
\"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.646518 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f906c05-1123-4e20-9792-a4930e299212-db-sync-config-data\") pod \"barbican-db-sync-qz65k\" (UID: \"2f906c05-1123-4e20-9792-a4930e299212\") " pod="openstack/barbican-db-sync-qz65k" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.646581 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnk24\" (UniqueName: \"kubernetes.io/projected/2f906c05-1123-4e20-9792-a4930e299212-kube-api-access-tnk24\") pod \"barbican-db-sync-qz65k\" (UID: \"2f906c05-1123-4e20-9792-a4930e299212\") " pod="openstack/barbican-db-sync-qz65k" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.646647 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dh4d\" (UniqueName: \"kubernetes.io/projected/87bb0786-49a4-42a8-8010-bd3dfd2342e1-kube-api-access-9dh4d\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.646696 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.646730 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.646789 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/87bb0786-49a4-42a8-8010-bd3dfd2342e1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.646851 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.654629 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-config-data\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.656301 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87bb0786-49a4-42a8-8010-bd3dfd2342e1-logs\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.663724 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-scripts\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.663916 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.664708 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/87bb0786-49a4-42a8-8010-bd3dfd2342e1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.664828 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.688273 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.700571 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dh4d\" (UniqueName: \"kubernetes.io/projected/87bb0786-49a4-42a8-8010-bd3dfd2342e1-kube-api-access-9dh4d\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 
18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.721895 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v8nvk"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.737062 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.759790 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f906c05-1123-4e20-9792-a4930e299212-combined-ca-bundle\") pod \"barbican-db-sync-qz65k\" (UID: \"2f906c05-1123-4e20-9792-a4930e299212\") " pod="openstack/barbican-db-sync-qz65k" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.768652 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f906c05-1123-4e20-9792-a4930e299212-combined-ca-bundle\") pod \"barbican-db-sync-qz65k\" (UID: \"2f906c05-1123-4e20-9792-a4930e299212\") " pod="openstack/barbican-db-sync-qz65k" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.771936 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f906c05-1123-4e20-9792-a4930e299212-db-sync-config-data\") pod \"barbican-db-sync-qz65k\" (UID: \"2f906c05-1123-4e20-9792-a4930e299212\") " pod="openstack/barbican-db-sync-qz65k" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.772003 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnk24\" (UniqueName: \"kubernetes.io/projected/2f906c05-1123-4e20-9792-a4930e299212-kube-api-access-tnk24\") pod \"barbican-db-sync-qz65k\" (UID: \"2f906c05-1123-4e20-9792-a4930e299212\") " pod="openstack/barbican-db-sync-qz65k" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.784828 5007 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f906c05-1123-4e20-9792-a4930e299212-db-sync-config-data\") pod \"barbican-db-sync-qz65k\" (UID: \"2f906c05-1123-4e20-9792-a4930e299212\") " pod="openstack/barbican-db-sync-qz65k" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.788716 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.798563 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnk24\" (UniqueName: \"kubernetes.io/projected/2f906c05-1123-4e20-9792-a4930e299212-kube-api-access-tnk24\") pod \"barbican-db-sync-qz65k\" (UID: \"2f906c05-1123-4e20-9792-a4930e299212\") " pod="openstack/barbican-db-sync-qz65k" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.851186 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:56:18 crc kubenswrapper[5007]: E0218 19:56:18.851635 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6146f8f3-f71e-4b93-9118-f003eefce996" containerName="dnsmasq-dns" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.851649 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="6146f8f3-f71e-4b93-9118-f003eefce996" containerName="dnsmasq-dns" Feb 18 19:56:18 crc kubenswrapper[5007]: E0218 19:56:18.851675 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6146f8f3-f71e-4b93-9118-f003eefce996" containerName="init" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.851681 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="6146f8f3-f71e-4b93-9118-f003eefce996" containerName="init" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.851834 5007 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6146f8f3-f71e-4b93-9118-f003eefce996" containerName="dnsmasq-dns" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.852798 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.862968 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.863675 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.863793 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.875903 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-config\") pod \"6146f8f3-f71e-4b93-9118-f003eefce996\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.876028 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-dns-swift-storage-0\") pod \"6146f8f3-f71e-4b93-9118-f003eefce996\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.876135 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-ovsdbserver-sb\") pod \"6146f8f3-f71e-4b93-9118-f003eefce996\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.876245 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-dns-svc\") pod \"6146f8f3-f71e-4b93-9118-f003eefce996\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.876315 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-ovsdbserver-nb\") pod \"6146f8f3-f71e-4b93-9118-f003eefce996\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.876460 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndv6g\" (UniqueName: \"kubernetes.io/projected/6146f8f3-f71e-4b93-9118-f003eefce996-kube-api-access-ndv6g\") pod \"6146f8f3-f71e-4b93-9118-f003eefce996\" (UID: \"6146f8f3-f71e-4b93-9118-f003eefce996\") " Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.902022 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6146f8f3-f71e-4b93-9118-f003eefce996-kube-api-access-ndv6g" (OuterVolumeSpecName: "kube-api-access-ndv6g") pod "6146f8f3-f71e-4b93-9118-f003eefce996" (UID: "6146f8f3-f71e-4b93-9118-f003eefce996"). InnerVolumeSpecName "kube-api-access-ndv6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.915739 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" event={"ID":"6146f8f3-f71e-4b93-9118-f003eefce996","Type":"ContainerDied","Data":"c70401ac822b6e1312c97284f9b176f3240ae601e478e30f65a88e3927d46b57"} Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.915771 5007 scope.go:117] "RemoveContainer" containerID="d6689d4c7125ce3ff7d7e2d39743f1c52f63ce790d416c33c3777e77a471a6fd" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.915852 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7984c7b7b9-n886b" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.941193 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v8nvk" event={"ID":"5ea25634-1f3e-47e6-9672-3273d7358d44","Type":"ContainerStarted","Data":"dbb95737ac6e4ab017d10889b8df66783f3b24c4511ee791df16bf8e807cbb11"} Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.948519 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6146f8f3-f71e-4b93-9118-f003eefce996" (UID: "6146f8f3-f71e-4b93-9118-f003eefce996"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.955131 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" event={"ID":"5bc12d79-93c4-44fe-becf-e048bb8554c9","Type":"ContainerStarted","Data":"438aebf45f7e464bc149905cdcaac18e9370b8c51297b911ad8e564c44e98b13"} Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.967218 5007 scope.go:117] "RemoveContainer" containerID="a6bfedb5a3fd06ae7671b186f0e3c85ed5f5951e951cb7ddaae58c8d1e094ba5" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.967696 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.978485 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.978518 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.978811 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86lh2\" (UniqueName: \"kubernetes.io/projected/0bb7faf9-0f01-403d-9c59-c604a002ad4a-kube-api-access-86lh2\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 
19:56:18.978839 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.978864 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.978945 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bb7faf9-0f01-403d-9c59-c604a002ad4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.979002 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.979036 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bb7faf9-0f01-403d-9c59-c604a002ad4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:18 crc 
kubenswrapper[5007]: I0218 19:56:18.979104 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.979120 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndv6g\" (UniqueName: \"kubernetes.io/projected/6146f8f3-f71e-4b93-9118-f003eefce996-kube-api-access-ndv6g\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.979297 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qz65k" Feb 18 19:56:18 crc kubenswrapper[5007]: I0218 19:56:18.989662 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6146f8f3-f71e-4b93-9118-f003eefce996" (UID: "6146f8f3-f71e-4b93-9118-f003eefce996"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.000261 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68dccdd97-4vtx9"] Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.018022 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6146f8f3-f71e-4b93-9118-f003eefce996" (UID: "6146f8f3-f71e-4b93-9118-f003eefce996"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.074348 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-config" (OuterVolumeSpecName: "config") pod "6146f8f3-f71e-4b93-9118-f003eefce996" (UID: "6146f8f3-f71e-4b93-9118-f003eefce996"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.087895 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bb7faf9-0f01-403d-9c59-c604a002ad4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.088305 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.088347 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bb7faf9-0f01-403d-9c59-c604a002ad4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.088528 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 
19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.088552 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.088705 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86lh2\" (UniqueName: \"kubernetes.io/projected/0bb7faf9-0f01-403d-9c59-c604a002ad4a-kube-api-access-86lh2\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.088730 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.088783 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.088873 5007 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.088885 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.088896 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.089042 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bb7faf9-0f01-403d-9c59-c604a002ad4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.090532 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.091060 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6146f8f3-f71e-4b93-9118-f003eefce996" (UID: "6146f8f3-f71e-4b93-9118-f003eefce996"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.093253 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.093381 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.093826 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bb7faf9-0f01-403d-9c59-c604a002ad4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.099700 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.102757 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.118897 
5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86lh2\" (UniqueName: \"kubernetes.io/projected/0bb7faf9-0f01-403d-9c59-c604a002ad4a-kube-api-access-86lh2\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.144652 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.192399 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6146f8f3-f71e-4b93-9118-f003eefce996-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.231466 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.257245 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.260839 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.289134 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-44gls"] Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.318035 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.367610 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-spkvd"] Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.407527 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7984c7b7b9-n886b"] Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.436715 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7984c7b7b9-n886b"] Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.453154 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-w57rb"] Feb 18 19:56:19 crc kubenswrapper[5007]: W0218 19:56:19.513934 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b4964c0_2913_437d_b391_ed607a8fa813.slice/crio-2a2fdfa6c4cd9a4706c003e0922e6bb273bc831bf5ae9726c56766ee9b13cfd8 WatchSource:0}: Error finding container 2a2fdfa6c4cd9a4706c003e0922e6bb273bc831bf5ae9726c56766ee9b13cfd8: Status 404 returned error can't find the container with id 2a2fdfa6c4cd9a4706c003e0922e6bb273bc831bf5ae9726c56766ee9b13cfd8 Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.561882 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-74c64d6b7c-xfb47"] Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.674259 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59b5956c5c-bhzjl"] Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.706198 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:56:19 crc kubenswrapper[5007]: W0218 19:56:19.710089 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42c854b_1439_4a01_9a89_c034c0272d32.slice/crio-bb45a0ed1f649f890541639f6b57d7f5eafbf15481916ce21f0ea7e3e84d6898 WatchSource:0}: Error finding container bb45a0ed1f649f890541639f6b57d7f5eafbf15481916ce21f0ea7e3e84d6898: Status 404 returned error can't find the container with id bb45a0ed1f649f890541639f6b57d7f5eafbf15481916ce21f0ea7e3e84d6898 Feb 18 19:56:19 crc kubenswrapper[5007]: W0218 19:56:19.734282 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf8794a0_c909_4418_ab5a_a4d89492f37f.slice/crio-f788b687f207e24fea0a023c5aceea35fd4377b8b676338631f4ef96f18a3f07 WatchSource:0}: Error finding container f788b687f207e24fea0a023c5aceea35fd4377b8b676338631f4ef96f18a3f07: Status 404 returned error can't find the container with id f788b687f207e24fea0a023c5aceea35fd4377b8b676338631f4ef96f18a3f07 Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.767722 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.774756 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.919317 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6146f8f3-f71e-4b93-9118-f003eefce996" 
path="/var/lib/kubelet/pods/6146f8f3-f71e-4b93-9118-f003eefce996/volumes" Feb 18 19:56:19 crc kubenswrapper[5007]: I0218 19:56:19.983838 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qz65k"] Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.017628 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617","Type":"ContainerStarted","Data":"10eb1444b59b9689a40715e0268b5af46b36215ea83e0e35da2c4b6dadec5aeb"} Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.027138 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-44gls" event={"ID":"8b4964c0-2913-437d-b391-ed607a8fa813","Type":"ContainerStarted","Data":"79598e84754a931ed805f0d30c9a1eaf7cc195ebf9c76595c65bdc3a437dbd07"} Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.027185 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-44gls" event={"ID":"8b4964c0-2913-437d-b391-ed607a8fa813","Type":"ContainerStarted","Data":"2a2fdfa6c4cd9a4706c003e0922e6bb273bc831bf5ae9726c56766ee9b13cfd8"} Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.035333 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"a6551271-9859-4f43-8a89-5c5da823ef56","Type":"ContainerStarted","Data":"ec7eadc2e8573af333a973e0ee4aebd80e5a10de4854279d0350118122f7ae43"} Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.038551 5007 generic.go:334] "Generic (PLEG): container finished" podID="5bc12d79-93c4-44fe-becf-e048bb8554c9" containerID="2cd13a0a27e879a4083193eb935b70fde4b10556ea6c84495cc1b2de011e818a" exitCode=0 Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.038613 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" 
event={"ID":"5bc12d79-93c4-44fe-becf-e048bb8554c9","Type":"ContainerDied","Data":"2cd13a0a27e879a4083193eb935b70fde4b10556ea6c84495cc1b2de011e818a"} Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.048444 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af8794a0-c909-4418-ab5a-a4d89492f37f","Type":"ContainerStarted","Data":"f788b687f207e24fea0a023c5aceea35fd4377b8b676338631f4ef96f18a3f07"} Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.052487 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bb9df403-a190-484e-8b58-2e33e44ddd85","Type":"ContainerStarted","Data":"0e51a2389cfcb3682abb26b5e33e0a9c720b90682c95d69cc66287d3a0c5857a"} Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.052526 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bb9df403-a190-484e-8b58-2e33e44ddd85","Type":"ContainerStarted","Data":"ffbcae0e6df3bda677f3aab38d351594be1d4ce36e5d77bbbfd4ff4277a5330e"} Feb 18 19:56:20 crc kubenswrapper[5007]: W0218 19:56:20.052578 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f906c05_1123_4e20_9792_a4930e299212.slice/crio-d38c8194881bf7338e6a3a6dda825c750a43ec6ba40d63386d12e0b081092b4b WatchSource:0}: Error finding container d38c8194881bf7338e6a3a6dda825c750a43ec6ba40d63386d12e0b081092b4b: Status 404 returned error can't find the container with id d38c8194881bf7338e6a3a6dda825c750a43ec6ba40d63386d12e0b081092b4b Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.055657 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.056354 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-spkvd" 
event={"ID":"757421ca-4bfc-42d6-ae73-bb479f3b5975","Type":"ContainerStarted","Data":"9cd83df916d197ce5e61bd4b073c9c7b72073990ae0fc697549a3c6768261b16"} Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.064658 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68dccdd97-4vtx9" event={"ID":"1739350e-8a84-4c46-9bd2-7d98b9007dae","Type":"ContainerStarted","Data":"7bdd226bb51991f955fdaef1c6f6a15a90aca18f50ca255eae46c721f0119730"} Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.070517 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-44gls" podStartSLOduration=3.070494033 podStartE2EDuration="3.070494033s" podCreationTimestamp="2026-02-18 19:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:20.047145708 +0000 UTC m=+1064.829265242" watchObservedRunningTime="2026-02-18 19:56:20.070494033 +0000 UTC m=+1064.852613557" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.074465 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v8nvk" event={"ID":"5ea25634-1f3e-47e6-9672-3273d7358d44","Type":"ContainerStarted","Data":"c8e5d89dd44e0b85f2e9f33e90d578d1171e1c5b57dbae8b4eb6f1eda88cf1cf"} Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.081214 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59b5956c5c-bhzjl" event={"ID":"e42c854b-1439-4a01-9a89-c034c0272d32","Type":"ContainerStarted","Data":"bb45a0ed1f649f890541639f6b57d7f5eafbf15481916ce21f0ea7e3e84d6898"} Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.082935 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w57rb" event={"ID":"c60b1e07-e980-450b-bfb8-c0210edeae7d","Type":"ContainerStarted","Data":"ce67be603610dcb020a8504ed0611a033eb0497cbd83a010f080c2cbc4490fa4"} Feb 18 19:56:20 crc kubenswrapper[5007]: 
W0218 19:56:20.086224 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87bb0786_49a4_42a8_8010_bd3dfd2342e1.slice/crio-48cf922ced55928ae454d91ed14bd2b138be0cedd24bea336716c1624314a83f WatchSource:0}: Error finding container 48cf922ced55928ae454d91ed14bd2b138be0cedd24bea336716c1624314a83f: Status 404 returned error can't find the container with id 48cf922ced55928ae454d91ed14bd2b138be0cedd24bea336716c1624314a83f Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.089386 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" event={"ID":"f87fc392-f98a-45e5-8f18-c3efbdf19e1a","Type":"ContainerStarted","Data":"e53dda0f0fc8f8f63490a7b8167319b855ae29ddc6ac455800737d5f36018e5b"} Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.109308 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.150408 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-v8nvk" podStartSLOduration=3.150388028 podStartE2EDuration="3.150388028s" podCreationTimestamp="2026-02-18 19:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:20.088440074 +0000 UTC m=+1064.870559608" watchObservedRunningTime="2026-02-18 19:56:20.150388028 +0000 UTC m=+1064.932507542" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.335559 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.480782 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.499545 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.599527 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59b5956c5c-bhzjl"] Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.613205 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.633157 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.653156 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68b6f5f89-58c25"] Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.654929 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.669852 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68b6f5f89-58c25"] Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.778610 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203ba913-bbde-4801-b85b-b472aef6a7f6-config-data\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.778670 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jltr8\" (UniqueName: \"kubernetes.io/projected/203ba913-bbde-4801-b85b-b472aef6a7f6-kube-api-access-jltr8\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.778694 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/203ba913-bbde-4801-b85b-b472aef6a7f6-logs\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.778807 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/203ba913-bbde-4801-b85b-b472aef6a7f6-horizon-secret-key\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.778871 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/203ba913-bbde-4801-b85b-b472aef6a7f6-scripts\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.820365 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.879625 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-config\") pod \"5bc12d79-93c4-44fe-becf-e048bb8554c9\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.879715 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-dns-swift-storage-0\") pod \"5bc12d79-93c4-44fe-becf-e048bb8554c9\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.879822 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-dns-svc\") pod \"5bc12d79-93c4-44fe-becf-e048bb8554c9\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.879866 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-ovsdbserver-nb\") pod \"5bc12d79-93c4-44fe-becf-e048bb8554c9\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.879958 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-ovsdbserver-sb\") pod \"5bc12d79-93c4-44fe-becf-e048bb8554c9\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.879989 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcjhn\" 
(UniqueName: \"kubernetes.io/projected/5bc12d79-93c4-44fe-becf-e048bb8554c9-kube-api-access-zcjhn\") pod \"5bc12d79-93c4-44fe-becf-e048bb8554c9\" (UID: \"5bc12d79-93c4-44fe-becf-e048bb8554c9\") " Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.881346 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203ba913-bbde-4801-b85b-b472aef6a7f6-config-data\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.881406 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jltr8\" (UniqueName: \"kubernetes.io/projected/203ba913-bbde-4801-b85b-b472aef6a7f6-kube-api-access-jltr8\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.881468 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203ba913-bbde-4801-b85b-b472aef6a7f6-logs\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.881601 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/203ba913-bbde-4801-b85b-b472aef6a7f6-horizon-secret-key\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.881669 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/203ba913-bbde-4801-b85b-b472aef6a7f6-scripts\") pod \"horizon-68b6f5f89-58c25\" (UID: 
\"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.882296 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/203ba913-bbde-4801-b85b-b472aef6a7f6-scripts\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.882808 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203ba913-bbde-4801-b85b-b472aef6a7f6-logs\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.885336 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203ba913-bbde-4801-b85b-b472aef6a7f6-config-data\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.907648 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/203ba913-bbde-4801-b85b-b472aef6a7f6-horizon-secret-key\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.913642 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc12d79-93c4-44fe-becf-e048bb8554c9-kube-api-access-zcjhn" (OuterVolumeSpecName: "kube-api-access-zcjhn") pod "5bc12d79-93c4-44fe-becf-e048bb8554c9" (UID: "5bc12d79-93c4-44fe-becf-e048bb8554c9"). InnerVolumeSpecName "kube-api-access-zcjhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.922687 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-config" (OuterVolumeSpecName: "config") pod "5bc12d79-93c4-44fe-becf-e048bb8554c9" (UID: "5bc12d79-93c4-44fe-becf-e048bb8554c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.923836 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jltr8\" (UniqueName: \"kubernetes.io/projected/203ba913-bbde-4801-b85b-b472aef6a7f6-kube-api-access-jltr8\") pod \"horizon-68b6f5f89-58c25\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.937711 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5bc12d79-93c4-44fe-becf-e048bb8554c9" (UID: "5bc12d79-93c4-44fe-becf-e048bb8554c9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.941051 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5bc12d79-93c4-44fe-becf-e048bb8554c9" (UID: "5bc12d79-93c4-44fe-becf-e048bb8554c9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.942272 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5bc12d79-93c4-44fe-becf-e048bb8554c9" (UID: "5bc12d79-93c4-44fe-becf-e048bb8554c9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.953027 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5bc12d79-93c4-44fe-becf-e048bb8554c9" (UID: "5bc12d79-93c4-44fe-becf-e048bb8554c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.983635 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.983662 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.983675 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.983685 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcjhn\" (UniqueName: \"kubernetes.io/projected/5bc12d79-93c4-44fe-becf-e048bb8554c9-kube-api-access-zcjhn\") on node \"crc\" DevicePath \"\"" Feb 18 
19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.983694 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:20 crc kubenswrapper[5007]: I0218 19:56:20.983701 5007 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bc12d79-93c4-44fe-becf-e048bb8554c9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.013934 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.166946 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qz65k" event={"ID":"2f906c05-1123-4e20-9792-a4930e299212","Type":"ContainerStarted","Data":"d38c8194881bf7338e6a3a6dda825c750a43ec6ba40d63386d12e0b081092b4b"} Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.173065 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0bb7faf9-0f01-403d-9c59-c604a002ad4a","Type":"ContainerStarted","Data":"835f2546213125f9b585ee6b101ec1f683c7a9093e0019f8695c16aa839ba80a"} Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.191387 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" event={"ID":"5bc12d79-93c4-44fe-becf-e048bb8554c9","Type":"ContainerDied","Data":"438aebf45f7e464bc149905cdcaac18e9370b8c51297b911ad8e564c44e98b13"} Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.191455 5007 scope.go:117] "RemoveContainer" containerID="2cd13a0a27e879a4083193eb935b70fde4b10556ea6c84495cc1b2de011e818a" Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.191605 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-689568cbb5-7vrl6" Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.236601 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"87bb0786-49a4-42a8-8010-bd3dfd2342e1","Type":"ContainerStarted","Data":"48cf922ced55928ae454d91ed14bd2b138be0cedd24bea336716c1624314a83f"} Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.255283 5007 generic.go:334] "Generic (PLEG): container finished" podID="f87fc392-f98a-45e5-8f18-c3efbdf19e1a" containerID="ff2e81c9fdf1c045532e6e08cb2aad57211eaa2b740975edfd742066085da8ee" exitCode=0 Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.255335 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" event={"ID":"f87fc392-f98a-45e5-8f18-c3efbdf19e1a","Type":"ContainerDied","Data":"ff2e81c9fdf1c045532e6e08cb2aad57211eaa2b740975edfd742066085da8ee"} Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.255359 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" event={"ID":"f87fc392-f98a-45e5-8f18-c3efbdf19e1a","Type":"ContainerStarted","Data":"e003b9b54791a8b096007fc8dd05ff4f0c648357d5cfe3bdccbd884cd1bdc921"} Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.258961 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.269587 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bb9df403-a190-484e-8b58-2e33e44ddd85","Type":"ContainerStarted","Data":"d089007680c6ecad73ea0412e763e1d4a6b3943d4af97d825102aac7e899c8a9"} Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.272967 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.366787 5007 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689568cbb5-7vrl6"] Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.375096 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-689568cbb5-7vrl6"] Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.378318 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" podStartSLOduration=4.378299185 podStartE2EDuration="4.378299185s" podCreationTimestamp="2026-02-18 19:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:21.314087237 +0000 UTC m=+1066.096206761" watchObservedRunningTime="2026-02-18 19:56:21.378299185 +0000 UTC m=+1066.160418709" Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.386395 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.386376305 podStartE2EDuration="4.386376305s" podCreationTimestamp="2026-02-18 19:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:21.347890149 +0000 UTC m=+1066.130009673" watchObservedRunningTime="2026-02-18 19:56:21.386376305 +0000 UTC m=+1066.168495829" Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.681267 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68b6f5f89-58c25"] Feb 18 19:56:21 crc kubenswrapper[5007]: W0218 19:56:21.711355 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod203ba913_bbde_4801_b85b_b472aef6a7f6.slice/crio-85d87c943020108847b49cde7f55e72a1360186524eff4afe4b7614b420a5be6 WatchSource:0}: Error finding container 85d87c943020108847b49cde7f55e72a1360186524eff4afe4b7614b420a5be6: Status 404 returned error can't find the container with 
id 85d87c943020108847b49cde7f55e72a1360186524eff4afe4b7614b420a5be6 Feb 18 19:56:21 crc kubenswrapper[5007]: I0218 19:56:21.943862 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bc12d79-93c4-44fe-becf-e048bb8554c9" path="/var/lib/kubelet/pods/5bc12d79-93c4-44fe-becf-e048bb8554c9/volumes" Feb 18 19:56:22 crc kubenswrapper[5007]: I0218 19:56:22.283067 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"87bb0786-49a4-42a8-8010-bd3dfd2342e1","Type":"ContainerStarted","Data":"03cc1596a683f627848e7a3115a32f4aa1137634fcee1a5d666548ecdcab17df"} Feb 18 19:56:22 crc kubenswrapper[5007]: I0218 19:56:22.285302 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api-log" containerID="cri-o://0e51a2389cfcb3682abb26b5e33e0a9c720b90682c95d69cc66287d3a0c5857a" gracePeriod=30 Feb 18 19:56:22 crc kubenswrapper[5007]: I0218 19:56:22.285676 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b6f5f89-58c25" event={"ID":"203ba913-bbde-4801-b85b-b472aef6a7f6","Type":"ContainerStarted","Data":"85d87c943020108847b49cde7f55e72a1360186524eff4afe4b7614b420a5be6"} Feb 18 19:56:22 crc kubenswrapper[5007]: I0218 19:56:22.288344 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api" containerID="cri-o://d089007680c6ecad73ea0412e763e1d4a6b3943d4af97d825102aac7e899c8a9" gracePeriod=30 Feb 18 19:56:22 crc kubenswrapper[5007]: I0218 19:56:22.297658 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.150:9322/\": EOF" Feb 18 19:56:22 crc kubenswrapper[5007]: I0218 19:56:22.888642 5007 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:56:23 crc kubenswrapper[5007]: I0218 19:56:23.312742 5007 generic.go:334] "Generic (PLEG): container finished" podID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerID="0e51a2389cfcb3682abb26b5e33e0a9c720b90682c95d69cc66287d3a0c5857a" exitCode=143 Feb 18 19:56:23 crc kubenswrapper[5007]: I0218 19:56:23.312807 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bb9df403-a190-484e-8b58-2e33e44ddd85","Type":"ContainerDied","Data":"0e51a2389cfcb3682abb26b5e33e0a9c720b90682c95d69cc66287d3a0c5857a"} Feb 18 19:56:23 crc kubenswrapper[5007]: I0218 19:56:23.317389 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0bb7faf9-0f01-403d-9c59-c604a002ad4a","Type":"ContainerStarted","Data":"1c44635720ed40aa8f5176fcdeb52628db658cb7ac417887edf2e5224e5fbbbe"} Feb 18 19:56:23 crc kubenswrapper[5007]: I0218 19:56:23.324946 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"87bb0786-49a4-42a8-8010-bd3dfd2342e1","Type":"ContainerStarted","Data":"85cf037b74cd9a1b7cf6d974c57e79a581fed2e62aded54514c7d1635a5173ae"} Feb 18 19:56:23 crc kubenswrapper[5007]: I0218 19:56:23.325086 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="87bb0786-49a4-42a8-8010-bd3dfd2342e1" containerName="glance-log" containerID="cri-o://03cc1596a683f627848e7a3115a32f4aa1137634fcee1a5d666548ecdcab17df" gracePeriod=30 Feb 18 19:56:23 crc kubenswrapper[5007]: I0218 19:56:23.325919 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="87bb0786-49a4-42a8-8010-bd3dfd2342e1" containerName="glance-httpd" containerID="cri-o://85cf037b74cd9a1b7cf6d974c57e79a581fed2e62aded54514c7d1635a5173ae" 
gracePeriod=30 Feb 18 19:56:23 crc kubenswrapper[5007]: I0218 19:56:23.357662 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.35763987 podStartE2EDuration="5.35763987s" podCreationTimestamp="2026-02-18 19:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:23.355080497 +0000 UTC m=+1068.137200021" watchObservedRunningTime="2026-02-18 19:56:23.35763987 +0000 UTC m=+1068.139759394" Feb 18 19:56:24 crc kubenswrapper[5007]: I0218 19:56:24.387759 5007 generic.go:334] "Generic (PLEG): container finished" podID="87bb0786-49a4-42a8-8010-bd3dfd2342e1" containerID="85cf037b74cd9a1b7cf6d974c57e79a581fed2e62aded54514c7d1635a5173ae" exitCode=143 Feb 18 19:56:24 crc kubenswrapper[5007]: I0218 19:56:24.388197 5007 generic.go:334] "Generic (PLEG): container finished" podID="87bb0786-49a4-42a8-8010-bd3dfd2342e1" containerID="03cc1596a683f627848e7a3115a32f4aa1137634fcee1a5d666548ecdcab17df" exitCode=143 Feb 18 19:56:24 crc kubenswrapper[5007]: I0218 19:56:24.388224 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"87bb0786-49a4-42a8-8010-bd3dfd2342e1","Type":"ContainerDied","Data":"85cf037b74cd9a1b7cf6d974c57e79a581fed2e62aded54514c7d1635a5173ae"} Feb 18 19:56:24 crc kubenswrapper[5007]: I0218 19:56:24.388255 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"87bb0786-49a4-42a8-8010-bd3dfd2342e1","Type":"ContainerDied","Data":"03cc1596a683f627848e7a3115a32f4aa1137634fcee1a5d666548ecdcab17df"} Feb 18 19:56:25 crc kubenswrapper[5007]: I0218 19:56:25.048995 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api" probeResult="failure" output="Get 
\"http://10.217.0.150:9322/\": read tcp 10.217.0.2:39460->10.217.0.150:9322: read: connection reset by peer" Feb 18 19:56:25 crc kubenswrapper[5007]: I0218 19:56:25.049993 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.150:9322/\": dial tcp 10.217.0.150:9322: connect: connection refused" Feb 18 19:56:25 crc kubenswrapper[5007]: I0218 19:56:25.402102 5007 generic.go:334] "Generic (PLEG): container finished" podID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerID="d089007680c6ecad73ea0412e763e1d4a6b3943d4af97d825102aac7e899c8a9" exitCode=0 Feb 18 19:56:25 crc kubenswrapper[5007]: I0218 19:56:25.402146 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bb9df403-a190-484e-8b58-2e33e44ddd85","Type":"ContainerDied","Data":"d089007680c6ecad73ea0412e763e1d4a6b3943d4af97d825102aac7e899c8a9"} Feb 18 19:56:26 crc kubenswrapper[5007]: I0218 19:56:26.415324 5007 generic.go:334] "Generic (PLEG): container finished" podID="5ea25634-1f3e-47e6-9672-3273d7358d44" containerID="c8e5d89dd44e0b85f2e9f33e90d578d1171e1c5b57dbae8b4eb6f1eda88cf1cf" exitCode=0 Feb 18 19:56:26 crc kubenswrapper[5007]: I0218 19:56:26.415615 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v8nvk" event={"ID":"5ea25634-1f3e-47e6-9672-3273d7358d44","Type":"ContainerDied","Data":"c8e5d89dd44e0b85f2e9f33e90d578d1171e1c5b57dbae8b4eb6f1eda88cf1cf"} Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.111764 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68dccdd97-4vtx9"] Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.155001 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7957464b6-627s5"] Feb 18 19:56:27 crc kubenswrapper[5007]: E0218 19:56:27.158786 5007 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5bc12d79-93c4-44fe-becf-e048bb8554c9" containerName="init" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.158810 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc12d79-93c4-44fe-becf-e048bb8554c9" containerName="init" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.159006 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc12d79-93c4-44fe-becf-e048bb8554c9" containerName="init" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.194483 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7957464b6-627s5"] Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.195344 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.199013 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.264608 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68b6f5f89-58c25"] Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.278492 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-784db9c5db-lr9xh"] Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.280079 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.284787 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-horizon-tls-certs\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.284889 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-combined-ca-bundle\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.284956 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b695b866-e754-46f6-8cd8-ef152bfc1a54-config-data\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.284977 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b695b866-e754-46f6-8cd8-ef152bfc1a54-logs\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.285029 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-horizon-secret-key\") pod \"horizon-7957464b6-627s5\" (UID: 
\"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.285046 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmqtt\" (UniqueName: \"kubernetes.io/projected/b695b866-e754-46f6-8cd8-ef152bfc1a54-kube-api-access-lmqtt\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.285106 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b695b866-e754-46f6-8cd8-ef152bfc1a54-scripts\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.294292 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-784db9c5db-lr9xh"] Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.387530 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-logs\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.387928 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-horizon-tls-certs\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.387972 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-combined-ca-bundle\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.388014 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-combined-ca-bundle\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.388052 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b695b866-e754-46f6-8cd8-ef152bfc1a54-config-data\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.388099 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b695b866-e754-46f6-8cd8-ef152bfc1a54-logs\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.388140 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-scripts\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.388206 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-horizon-secret-key\") pod 
\"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.388299 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmqtt\" (UniqueName: \"kubernetes.io/projected/b695b866-e754-46f6-8cd8-ef152bfc1a54-kube-api-access-lmqtt\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.388377 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-horizon-secret-key\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.388479 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b695b866-e754-46f6-8cd8-ef152bfc1a54-scripts\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.388504 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9nq4\" (UniqueName: \"kubernetes.io/projected/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-kube-api-access-q9nq4\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.388549 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-horizon-tls-certs\") pod 
\"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.388576 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-config-data\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.388878 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b695b866-e754-46f6-8cd8-ef152bfc1a54-logs\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.389541 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b695b866-e754-46f6-8cd8-ef152bfc1a54-scripts\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.390460 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b695b866-e754-46f6-8cd8-ef152bfc1a54-config-data\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.396112 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-combined-ca-bundle\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc 
kubenswrapper[5007]: I0218 19:56:27.396362 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-horizon-secret-key\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.402606 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-horizon-tls-certs\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.408132 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmqtt\" (UniqueName: \"kubernetes.io/projected/b695b866-e754-46f6-8cd8-ef152bfc1a54-kube-api-access-lmqtt\") pod \"horizon-7957464b6-627s5\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.491181 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-horizon-tls-certs\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.492568 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-config-data\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.492730 5007 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-logs\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.492810 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-combined-ca-bundle\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.492901 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-scripts\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.493032 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-horizon-secret-key\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.493138 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9nq4\" (UniqueName: \"kubernetes.io/projected/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-kube-api-access-q9nq4\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.495345 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-logs\") 
pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.496455 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-horizon-tls-certs\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.496525 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-config-data\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.496589 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-combined-ca-bundle\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.496679 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-scripts\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.504918 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-horizon-secret-key\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 
crc kubenswrapper[5007]: I0218 19:56:27.510808 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9nq4\" (UniqueName: \"kubernetes.io/projected/4c312663-7b4a-468e-a0f8-62c9ef9e3fc2-kube-api-access-q9nq4\") pod \"horizon-784db9c5db-lr9xh\" (UID: \"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2\") " pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.522333 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.603176 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:27 crc kubenswrapper[5007]: I0218 19:56:27.889989 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.150:9322/\": dial tcp 10.217.0.150:9322: connect: connection refused" Feb 18 19:56:28 crc kubenswrapper[5007]: I0218 19:56:28.542381 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:56:28 crc kubenswrapper[5007]: I0218 19:56:28.606456 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75fb45b9c-prmzx"] Feb 18 19:56:28 crc kubenswrapper[5007]: I0218 19:56:28.606676 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" podUID="5f8c2991-3116-46e0-b302-397291db1589" containerName="dnsmasq-dns" containerID="cri-o://9ba94a3f48a4ea3c1686b42193961341f636681296c9030bdf268c1d9b5034d2" gracePeriod=10 Feb 18 19:56:29 crc kubenswrapper[5007]: I0218 19:56:29.446726 5007 generic.go:334] "Generic (PLEG): container finished" podID="5f8c2991-3116-46e0-b302-397291db1589" 
containerID="9ba94a3f48a4ea3c1686b42193961341f636681296c9030bdf268c1d9b5034d2" exitCode=0 Feb 18 19:56:29 crc kubenswrapper[5007]: I0218 19:56:29.447048 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" event={"ID":"5f8c2991-3116-46e0-b302-397291db1589","Type":"ContainerDied","Data":"9ba94a3f48a4ea3c1686b42193961341f636681296c9030bdf268c1d9b5034d2"} Feb 18 19:56:32 crc kubenswrapper[5007]: I0218 19:56:32.889287 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.150:9322/\": dial tcp 10.217.0.150:9322: connect: connection refused" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.093144 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.250259 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-combined-ca-bundle\") pod \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.250316 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-config-data\") pod \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.250355 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/87bb0786-49a4-42a8-8010-bd3dfd2342e1-httpd-run\") pod \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " Feb 18 
19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.250388 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dh4d\" (UniqueName: \"kubernetes.io/projected/87bb0786-49a4-42a8-8010-bd3dfd2342e1-kube-api-access-9dh4d\") pod \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.250467 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.250489 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-public-tls-certs\") pod \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.250648 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87bb0786-49a4-42a8-8010-bd3dfd2342e1-logs\") pod \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.250694 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-scripts\") pod \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\" (UID: \"87bb0786-49a4-42a8-8010-bd3dfd2342e1\") " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.254115 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87bb0786-49a4-42a8-8010-bd3dfd2342e1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "87bb0786-49a4-42a8-8010-bd3dfd2342e1" 
(UID: "87bb0786-49a4-42a8-8010-bd3dfd2342e1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.254693 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87bb0786-49a4-42a8-8010-bd3dfd2342e1-logs" (OuterVolumeSpecName: "logs") pod "87bb0786-49a4-42a8-8010-bd3dfd2342e1" (UID: "87bb0786-49a4-42a8-8010-bd3dfd2342e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.255918 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-scripts" (OuterVolumeSpecName: "scripts") pod "87bb0786-49a4-42a8-8010-bd3dfd2342e1" (UID: "87bb0786-49a4-42a8-8010-bd3dfd2342e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.263577 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "87bb0786-49a4-42a8-8010-bd3dfd2342e1" (UID: "87bb0786-49a4-42a8-8010-bd3dfd2342e1"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.267758 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87bb0786-49a4-42a8-8010-bd3dfd2342e1-kube-api-access-9dh4d" (OuterVolumeSpecName: "kube-api-access-9dh4d") pod "87bb0786-49a4-42a8-8010-bd3dfd2342e1" (UID: "87bb0786-49a4-42a8-8010-bd3dfd2342e1"). InnerVolumeSpecName "kube-api-access-9dh4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.296546 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87bb0786-49a4-42a8-8010-bd3dfd2342e1" (UID: "87bb0786-49a4-42a8-8010-bd3dfd2342e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.302503 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-config-data" (OuterVolumeSpecName: "config-data") pod "87bb0786-49a4-42a8-8010-bd3dfd2342e1" (UID: "87bb0786-49a4-42a8-8010-bd3dfd2342e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.303840 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "87bb0786-49a4-42a8-8010-bd3dfd2342e1" (UID: "87bb0786-49a4-42a8-8010-bd3dfd2342e1"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.355453 5007 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/87bb0786-49a4-42a8-8010-bd3dfd2342e1-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.355490 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dh4d\" (UniqueName: \"kubernetes.io/projected/87bb0786-49a4-42a8-8010-bd3dfd2342e1-kube-api-access-9dh4d\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.355529 5007 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.355542 5007 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.355556 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87bb0786-49a4-42a8-8010-bd3dfd2342e1-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.355566 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.355576 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.355587 5007 reconciler_common.go:293] "Volume detached 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bb0786-49a4-42a8-8010-bd3dfd2342e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.375698 5007 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.457034 5007 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: E0218 19:56:35.480920 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Feb 18 19:56:35 crc kubenswrapper[5007]: E0218 19:56:35.480971 5007 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Feb 18 19:56:35 crc kubenswrapper[5007]: E0218 19:56:35.481159 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.80:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cbhdfh7chc8h576h89h5ch69h5fdhd4h64dh674h59h9ch569h5d7h597h599h5dh5cbh679h98hc5h59h5bbh567h568hddh68h558h56ch9cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrzph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(af8794a0-c909-4418-ab5a-a4d89492f37f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.514927 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v8nvk" event={"ID":"5ea25634-1f3e-47e6-9672-3273d7358d44","Type":"ContainerDied","Data":"dbb95737ac6e4ab017d10889b8df66783f3b24c4511ee791df16bf8e807cbb11"} Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.514977 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbb95737ac6e4ab017d10889b8df66783f3b24c4511ee791df16bf8e807cbb11" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.518173 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.520160 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"87bb0786-49a4-42a8-8010-bd3dfd2342e1","Type":"ContainerDied","Data":"48cf922ced55928ae454d91ed14bd2b138be0cedd24bea336716c1624314a83f"} Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.520212 5007 scope.go:117] "RemoveContainer" containerID="85cf037b74cd9a1b7cf6d974c57e79a581fed2e62aded54514c7d1635a5173ae" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.520213 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.574625 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.588698 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.605391 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:56:35 crc kubenswrapper[5007]: E0218 19:56:35.605833 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea25634-1f3e-47e6-9672-3273d7358d44" containerName="keystone-bootstrap" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.605851 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea25634-1f3e-47e6-9672-3273d7358d44" containerName="keystone-bootstrap" Feb 18 19:56:35 crc kubenswrapper[5007]: E0218 19:56:35.605884 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bb0786-49a4-42a8-8010-bd3dfd2342e1" containerName="glance-log" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.605891 5007 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="87bb0786-49a4-42a8-8010-bd3dfd2342e1" containerName="glance-log" Feb 18 19:56:35 crc kubenswrapper[5007]: E0218 19:56:35.605904 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bb0786-49a4-42a8-8010-bd3dfd2342e1" containerName="glance-httpd" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.605911 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bb0786-49a4-42a8-8010-bd3dfd2342e1" containerName="glance-httpd" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.611272 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea25634-1f3e-47e6-9672-3273d7358d44" containerName="keystone-bootstrap" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.611312 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bb0786-49a4-42a8-8010-bd3dfd2342e1" containerName="glance-httpd" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.611330 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bb0786-49a4-42a8-8010-bd3dfd2342e1" containerName="glance-log" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.612538 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.617095 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.622654 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.624399 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.660008 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-fernet-keys\") pod \"5ea25634-1f3e-47e6-9672-3273d7358d44\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.660162 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4lxw\" (UniqueName: \"kubernetes.io/projected/5ea25634-1f3e-47e6-9672-3273d7358d44-kube-api-access-k4lxw\") pod \"5ea25634-1f3e-47e6-9672-3273d7358d44\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.660246 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-scripts\") pod \"5ea25634-1f3e-47e6-9672-3273d7358d44\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.660319 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-combined-ca-bundle\") pod \"5ea25634-1f3e-47e6-9672-3273d7358d44\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " 
Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.660365 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-credential-keys\") pod \"5ea25634-1f3e-47e6-9672-3273d7358d44\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.660402 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-config-data\") pod \"5ea25634-1f3e-47e6-9672-3273d7358d44\" (UID: \"5ea25634-1f3e-47e6-9672-3273d7358d44\") " Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.671978 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5ea25634-1f3e-47e6-9672-3273d7358d44" (UID: "5ea25634-1f3e-47e6-9672-3273d7358d44"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.671992 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-scripts" (OuterVolumeSpecName: "scripts") pod "5ea25634-1f3e-47e6-9672-3273d7358d44" (UID: "5ea25634-1f3e-47e6-9672-3273d7358d44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.672034 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea25634-1f3e-47e6-9672-3273d7358d44-kube-api-access-k4lxw" (OuterVolumeSpecName: "kube-api-access-k4lxw") pod "5ea25634-1f3e-47e6-9672-3273d7358d44" (UID: "5ea25634-1f3e-47e6-9672-3273d7358d44"). InnerVolumeSpecName "kube-api-access-k4lxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.673604 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5ea25634-1f3e-47e6-9672-3273d7358d44" (UID: "5ea25634-1f3e-47e6-9672-3273d7358d44"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.706166 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ea25634-1f3e-47e6-9672-3273d7358d44" (UID: "5ea25634-1f3e-47e6-9672-3273d7358d44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.719172 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-config-data" (OuterVolumeSpecName: "config-data") pod "5ea25634-1f3e-47e6-9672-3273d7358d44" (UID: "5ea25634-1f3e-47e6-9672-3273d7358d44"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.765551 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.765595 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcdb880d-97da-4651-983e-8ccef0ea3d51-logs\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.765619 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-config-data\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.765652 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcdb880d-97da-4651-983e-8ccef0ea3d51-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.765718 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.765739 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzwhw\" (UniqueName: \"kubernetes.io/projected/fcdb880d-97da-4651-983e-8ccef0ea3d51-kube-api-access-kzwhw\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.765767 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.766593 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-scripts\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.766841 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.766859 5007 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.766872 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4lxw\" (UniqueName: 
\"kubernetes.io/projected/5ea25634-1f3e-47e6-9672-3273d7358d44-kube-api-access-k4lxw\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.766885 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.766894 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.766902 5007 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5ea25634-1f3e-47e6-9672-3273d7358d44-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.868276 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcdb880d-97da-4651-983e-8ccef0ea3d51-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.868374 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.868395 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzwhw\" (UniqueName: \"kubernetes.io/projected/fcdb880d-97da-4651-983e-8ccef0ea3d51-kube-api-access-kzwhw\") pod \"glance-default-external-api-0\" (UID: 
\"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.868448 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.868514 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-scripts\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.868534 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.868551 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcdb880d-97da-4651-983e-8ccef0ea3d51-logs\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.868569 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-config-data\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc 
kubenswrapper[5007]: I0218 19:56:35.869127 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcdb880d-97da-4651-983e-8ccef0ea3d51-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.870026 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.870377 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcdb880d-97da-4651-983e-8ccef0ea3d51-logs\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.871377 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.871759 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.873204 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-scripts\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.873867 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.886092 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.886310 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzwhw\" (UniqueName: \"kubernetes.io/projected/fcdb880d-97da-4651-983e-8ccef0ea3d51-kube-api-access-kzwhw\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.890675 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-config-data\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.922350 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " pod="openstack/glance-default-external-api-0" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.924821 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87bb0786-49a4-42a8-8010-bd3dfd2342e1" 
path="/var/lib/kubelet/pods/87bb0786-49a4-42a8-8010-bd3dfd2342e1/volumes" Feb 18 19:56:35 crc kubenswrapper[5007]: I0218 19:56:35.946372 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.235905 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" podUID="5f8c2991-3116-46e0-b302-397291db1589" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: i/o timeout" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.527472 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v8nvk" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.705376 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-v8nvk"] Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.719054 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-v8nvk"] Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.794721 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zljlc"] Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.797031 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.799800 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.802068 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.802481 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.802548 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sd9ml" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.805920 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.806718 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zljlc"] Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.989477 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-credential-keys\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.989530 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-combined-ca-bundle\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.989555 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-qk6qp\" (UniqueName: \"kubernetes.io/projected/ae6b2577-67ac-4c20-8242-e9533bab124d-kube-api-access-qk6qp\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.989582 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-fernet-keys\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.989741 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-config-data\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:36 crc kubenswrapper[5007]: I0218 19:56:36.989758 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-scripts\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.091565 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-credential-keys\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.091628 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-combined-ca-bundle\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.091652 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk6qp\" (UniqueName: \"kubernetes.io/projected/ae6b2577-67ac-4c20-8242-e9533bab124d-kube-api-access-qk6qp\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.091676 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-fernet-keys\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.091782 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-config-data\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.091799 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-scripts\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.101052 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-scripts\") pod \"keystone-bootstrap-zljlc\" (UID: 
\"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.101390 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-credential-keys\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.101441 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-combined-ca-bundle\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.101486 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-fernet-keys\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.103560 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-config-data\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.106931 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk6qp\" (UniqueName: \"kubernetes.io/projected/ae6b2577-67ac-4c20-8242-e9533bab124d-kube-api-access-qk6qp\") pod \"keystone-bootstrap-zljlc\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 
19:56:37.124264 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:56:37 crc kubenswrapper[5007]: I0218 19:56:37.928775 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea25634-1f3e-47e6-9672-3273d7358d44" path="/var/lib/kubelet/pods/5ea25634-1f3e-47e6-9672-3273d7358d44/volumes" Feb 18 19:56:38 crc kubenswrapper[5007]: E0218 19:56:38.756727 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Feb 18 19:56:38 crc kubenswrapper[5007]: E0218 19:56:38.757077 5007 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Feb 18 19:56:38 crc kubenswrapper[5007]: E0218 19:56:38.757307 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.80:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcr8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-spkvd_openstack(757421ca-4bfc-42d6-ae73-bb479f3b5975): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:56:38 crc kubenswrapper[5007]: E0218 19:56:38.758631 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-spkvd" podUID="757421ca-4bfc-42d6-ae73-bb479f3b5975" Feb 18 19:56:39 crc kubenswrapper[5007]: E0218 19:56:39.553751 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-spkvd" podUID="757421ca-4bfc-42d6-ae73-bb479f3b5975" Feb 18 19:56:41 crc kubenswrapper[5007]: I0218 19:56:41.237065 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" podUID="5f8c2991-3116-46e0-b302-397291db1589" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: i/o timeout" Feb 18 19:56:42 crc kubenswrapper[5007]: I0218 19:56:42.889232 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.150:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:56:44 crc kubenswrapper[5007]: I0218 19:56:44.612982 5007 generic.go:334] "Generic (PLEG): container finished" podID="8b4964c0-2913-437d-b391-ed607a8fa813" containerID="79598e84754a931ed805f0d30c9a1eaf7cc195ebf9c76595c65bdc3a437dbd07" exitCode=0 Feb 18 19:56:44 crc kubenswrapper[5007]: I0218 19:56:44.613029 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-sync-44gls" event={"ID":"8b4964c0-2913-437d-b391-ed607a8fa813","Type":"ContainerDied","Data":"79598e84754a931ed805f0d30c9a1eaf7cc195ebf9c76595c65bdc3a437dbd07"} Feb 18 19:56:46 crc kubenswrapper[5007]: I0218 19:56:46.237614 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" podUID="5f8c2991-3116-46e0-b302-397291db1589" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: i/o timeout" Feb 18 19:56:46 crc kubenswrapper[5007]: I0218 19:56:46.238117 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.571256 5007 scope.go:117] "RemoveContainer" containerID="03cc1596a683f627848e7a3115a32f4aa1137634fcee1a5d666548ecdcab17df" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.657613 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bb9df403-a190-484e-8b58-2e33e44ddd85","Type":"ContainerDied","Data":"ffbcae0e6df3bda677f3aab38d351594be1d4ce36e5d77bbbfd4ff4277a5330e"} Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.657668 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffbcae0e6df3bda677f3aab38d351594be1d4ce36e5d77bbbfd4ff4277a5330e" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.665390 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-44gls" event={"ID":"8b4964c0-2913-437d-b391-ed607a8fa813","Type":"ContainerDied","Data":"2a2fdfa6c4cd9a4706c003e0922e6bb273bc831bf5ae9726c56766ee9b13cfd8"} Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.665450 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a2fdfa6c4cd9a4706c003e0922e6bb273bc831bf5ae9726c56766ee9b13cfd8" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.665458 5007 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.669862 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" event={"ID":"5f8c2991-3116-46e0-b302-397291db1589","Type":"ContainerDied","Data":"a4458f99e4627d4de4ffaf2f7767b5af27be431456a47a8bb54f967605cd4b8a"} Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.671603 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.678379 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829196 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-ovsdbserver-nb\") pod \"5f8c2991-3116-46e0-b302-397291db1589\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829251 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-custom-prometheus-ca\") pod \"bb9df403-a190-484e-8b58-2e33e44ddd85\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829279 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45tvw\" (UniqueName: \"kubernetes.io/projected/bb9df403-a190-484e-8b58-2e33e44ddd85-kube-api-access-45tvw\") pod \"bb9df403-a190-484e-8b58-2e33e44ddd85\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829298 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-config-data\") pod \"bb9df403-a190-484e-8b58-2e33e44ddd85\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829316 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s97f6\" (UniqueName: \"kubernetes.io/projected/8b4964c0-2913-437d-b391-ed607a8fa813-kube-api-access-s97f6\") pod \"8b4964c0-2913-437d-b391-ed607a8fa813\" (UID: \"8b4964c0-2913-437d-b391-ed607a8fa813\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829342 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-dns-swift-storage-0\") pod \"5f8c2991-3116-46e0-b302-397291db1589\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829372 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-dns-svc\") pod \"5f8c2991-3116-46e0-b302-397291db1589\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829469 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b4964c0-2913-437d-b391-ed607a8fa813-combined-ca-bundle\") pod \"8b4964c0-2913-437d-b391-ed607a8fa813\" (UID: \"8b4964c0-2913-437d-b391-ed607a8fa813\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829507 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-config\") pod \"5f8c2991-3116-46e0-b302-397291db1589\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 
19:56:47.829528 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-ovsdbserver-sb\") pod \"5f8c2991-3116-46e0-b302-397291db1589\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829546 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpmck\" (UniqueName: \"kubernetes.io/projected/5f8c2991-3116-46e0-b302-397291db1589-kube-api-access-qpmck\") pod \"5f8c2991-3116-46e0-b302-397291db1589\" (UID: \"5f8c2991-3116-46e0-b302-397291db1589\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829569 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb9df403-a190-484e-8b58-2e33e44ddd85-logs\") pod \"bb9df403-a190-484e-8b58-2e33e44ddd85\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829614 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b4964c0-2913-437d-b391-ed607a8fa813-config\") pod \"8b4964c0-2913-437d-b391-ed607a8fa813\" (UID: \"8b4964c0-2913-437d-b391-ed607a8fa813\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.829635 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-combined-ca-bundle\") pod \"bb9df403-a190-484e-8b58-2e33e44ddd85\" (UID: \"bb9df403-a190-484e-8b58-2e33e44ddd85\") " Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.830382 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb9df403-a190-484e-8b58-2e33e44ddd85-logs" (OuterVolumeSpecName: "logs") pod "bb9df403-a190-484e-8b58-2e33e44ddd85" (UID: 
"bb9df403-a190-484e-8b58-2e33e44ddd85"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.834476 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9df403-a190-484e-8b58-2e33e44ddd85-kube-api-access-45tvw" (OuterVolumeSpecName: "kube-api-access-45tvw") pod "bb9df403-a190-484e-8b58-2e33e44ddd85" (UID: "bb9df403-a190-484e-8b58-2e33e44ddd85"). InnerVolumeSpecName "kube-api-access-45tvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.840705 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b4964c0-2913-437d-b391-ed607a8fa813-kube-api-access-s97f6" (OuterVolumeSpecName: "kube-api-access-s97f6") pod "8b4964c0-2913-437d-b391-ed607a8fa813" (UID: "8b4964c0-2913-437d-b391-ed607a8fa813"). InnerVolumeSpecName "kube-api-access-s97f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.861783 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8c2991-3116-46e0-b302-397291db1589-kube-api-access-qpmck" (OuterVolumeSpecName: "kube-api-access-qpmck") pod "5f8c2991-3116-46e0-b302-397291db1589" (UID: "5f8c2991-3116-46e0-b302-397291db1589"). InnerVolumeSpecName "kube-api-access-qpmck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.870861 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b4964c0-2913-437d-b391-ed607a8fa813-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b4964c0-2913-437d-b391-ed607a8fa813" (UID: "8b4964c0-2913-437d-b391-ed607a8fa813"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.872938 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b4964c0-2913-437d-b391-ed607a8fa813-config" (OuterVolumeSpecName: "config") pod "8b4964c0-2913-437d-b391-ed607a8fa813" (UID: "8b4964c0-2913-437d-b391-ed607a8fa813"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.890135 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.150:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.897416 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f8c2991-3116-46e0-b302-397291db1589" (UID: "5f8c2991-3116-46e0-b302-397291db1589"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.898714 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f8c2991-3116-46e0-b302-397291db1589" (UID: "5f8c2991-3116-46e0-b302-397291db1589"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.907605 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-config" (OuterVolumeSpecName: "config") pod "5f8c2991-3116-46e0-b302-397291db1589" (UID: "5f8c2991-3116-46e0-b302-397291db1589"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.912098 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5f8c2991-3116-46e0-b302-397291db1589" (UID: "5f8c2991-3116-46e0-b302-397291db1589"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.912409 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb9df403-a190-484e-8b58-2e33e44ddd85" (UID: "bb9df403-a190-484e-8b58-2e33e44ddd85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.917346 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "bb9df403-a190-484e-8b58-2e33e44ddd85" (UID: "bb9df403-a190-484e-8b58-2e33e44ddd85"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.922405 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f8c2991-3116-46e0-b302-397291db1589" (UID: "5f8c2991-3116-46e0-b302-397291db1589"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.924365 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-config-data" (OuterVolumeSpecName: "config-data") pod "bb9df403-a190-484e-8b58-2e33e44ddd85" (UID: "bb9df403-a190-484e-8b58-2e33e44ddd85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932303 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b4964c0-2913-437d-b391-ed607a8fa813-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932338 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932349 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932359 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpmck\" (UniqueName: \"kubernetes.io/projected/5f8c2991-3116-46e0-b302-397291db1589-kube-api-access-qpmck\") on node \"crc\" DevicePath \"\"" 
Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932369 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb9df403-a190-484e-8b58-2e33e44ddd85-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932379 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b4964c0-2913-437d-b391-ed607a8fa813-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932387 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932395 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932403 5007 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932412 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45tvw\" (UniqueName: \"kubernetes.io/projected/bb9df403-a190-484e-8b58-2e33e44ddd85-kube-api-access-45tvw\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932421 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9df403-a190-484e-8b58-2e33e44ddd85-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932457 5007 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-s97f6\" (UniqueName: \"kubernetes.io/projected/8b4964c0-2913-437d-b391-ed607a8fa813-kube-api-access-s97f6\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932466 5007 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[5007]: I0218 19:56:47.932474 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f8c2991-3116-46e0-b302-397291db1589-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.680665 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0bb7faf9-0f01-403d-9c59-c604a002ad4a","Type":"ContainerStarted","Data":"eda71672dfdca4793f88b1f52ebb9e9b718064ef8ec6a93d1d8368dd7fd73a34"} Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.680717 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.680768 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.680787 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-44gls" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.681147 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0bb7faf9-0f01-403d-9c59-c604a002ad4a" containerName="glance-log" containerID="cri-o://1c44635720ed40aa8f5176fcdeb52628db658cb7ac417887edf2e5224e5fbbbe" gracePeriod=30 Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.681315 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0bb7faf9-0f01-403d-9c59-c604a002ad4a" containerName="glance-httpd" containerID="cri-o://eda71672dfdca4793f88b1f52ebb9e9b718064ef8ec6a93d1d8368dd7fd73a34" gracePeriod=30 Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.715303 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75fb45b9c-prmzx"] Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.736963 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75fb45b9c-prmzx"] Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.748866 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=31.748842886 podStartE2EDuration="31.748842886s" podCreationTimestamp="2026-02-18 19:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:48.726722507 +0000 UTC m=+1093.508842031" watchObservedRunningTime="2026-02-18 19:56:48.748842886 +0000 UTC m=+1093.530962420" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.773444 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.783541 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:56:48 crc 
kubenswrapper[5007]: I0218 19:56:48.805723 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:56:48 crc kubenswrapper[5007]: E0218 19:56:48.806072 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8c2991-3116-46e0-b302-397291db1589" containerName="init" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.806089 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8c2991-3116-46e0-b302-397291db1589" containerName="init" Feb 18 19:56:48 crc kubenswrapper[5007]: E0218 19:56:48.806105 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8c2991-3116-46e0-b302-397291db1589" containerName="dnsmasq-dns" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.806112 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8c2991-3116-46e0-b302-397291db1589" containerName="dnsmasq-dns" Feb 18 19:56:48 crc kubenswrapper[5007]: E0218 19:56:48.806131 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.806136 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api" Feb 18 19:56:48 crc kubenswrapper[5007]: E0218 19:56:48.806145 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4964c0-2913-437d-b391-ed607a8fa813" containerName="neutron-db-sync" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.806150 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4964c0-2913-437d-b391-ed607a8fa813" containerName="neutron-db-sync" Feb 18 19:56:48 crc kubenswrapper[5007]: E0218 19:56:48.806169 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api-log" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.806175 5007 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api-log" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.806357 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b4964c0-2913-437d-b391-ed607a8fa813" containerName="neutron-db-sync" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.806370 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.806384 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" containerName="watcher-api-log" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.806395 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8c2991-3116-46e0-b302-397291db1589" containerName="dnsmasq-dns" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.807740 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.809707 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.815227 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.958597 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-config-data\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.958639 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.958676 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6612742-0c85-4677-9495-0e93227acfd6-logs\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.958754 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.958807 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wc5v\" (UniqueName: \"kubernetes.io/projected/a6612742-0c85-4677-9495-0e93227acfd6-kube-api-access-6wc5v\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.994701 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-765c898489-2trsf"] Feb 18 19:56:48 crc kubenswrapper[5007]: I0218 19:56:48.996146 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.023283 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-765c898489-2trsf"] Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.060087 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.060187 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wc5v\" (UniqueName: \"kubernetes.io/projected/a6612742-0c85-4677-9495-0e93227acfd6-kube-api-access-6wc5v\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.060251 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-config-data\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.060268 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.060301 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6612742-0c85-4677-9495-0e93227acfd6-logs\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " 
pod="openstack/watcher-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.060646 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6612742-0c85-4677-9495-0e93227acfd6-logs\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.068077 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.068474 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-config-data\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.068719 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.094237 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wc5v\" (UniqueName: \"kubernetes.io/projected/a6612742-0c85-4677-9495-0e93227acfd6-kube-api-access-6wc5v\") pod \"watcher-api-0\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " pod="openstack/watcher-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.125737 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6977b8795d-9nwjb"] Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 
19:56:49.127235 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.129517 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.146186 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.146364 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.146483 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-j4zws" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.147003 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.150922 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6977b8795d-9nwjb"] Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.162633 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9bth\" (UniqueName: \"kubernetes.io/projected/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-kube-api-access-f9bth\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.162689 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-config\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 
19:56:49.162741 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-ovsdbserver-nb\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.162768 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-dns-swift-storage-0\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.162830 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-dns-svc\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.162848 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-ovsdbserver-sb\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.232400 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.232486 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.264801 
5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-dns-svc\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.264841 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-ovsdbserver-sb\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.264871 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbl96\" (UniqueName: \"kubernetes.io/projected/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-kube-api-access-kbl96\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.264891 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-ovndb-tls-certs\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.264927 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-config\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.264956 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f9bth\" (UniqueName: \"kubernetes.io/projected/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-kube-api-access-f9bth\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.264978 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-httpd-config\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.265002 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-config\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.265047 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-ovsdbserver-nb\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.265066 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-combined-ca-bundle\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.265090 5007 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-dns-swift-storage-0\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.265794 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-dns-svc\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.265940 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-dns-swift-storage-0\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.266484 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-ovsdbserver-sb\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.266980 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-config\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.267073 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-ovsdbserver-nb\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.288477 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9bth\" (UniqueName: \"kubernetes.io/projected/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-kube-api-access-f9bth\") pod \"dnsmasq-dns-765c898489-2trsf\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.341890 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.367007 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-combined-ca-bundle\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.367115 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbl96\" (UniqueName: \"kubernetes.io/projected/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-kube-api-access-kbl96\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.367138 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-ovndb-tls-certs\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 
19:56:49.367168 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-config\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.367203 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-httpd-config\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.373793 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-httpd-config\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.375139 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-config\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.375758 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-combined-ca-bundle\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.375902 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-ovndb-tls-certs\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.386343 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbl96\" (UniqueName: \"kubernetes.io/projected/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-kube-api-access-kbl96\") pod \"neutron-6977b8795d-9nwjb\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: E0218 19:56:49.401093 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 18 19:56:49 crc kubenswrapper[5007]: E0218 19:56:49.401153 5007 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 18 19:56:49 crc kubenswrapper[5007]: E0218 19:56:49.401277 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.80:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grhn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-w57rb_openstack(c60b1e07-e980-450b-bfb8-c0210edeae7d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:56:49 crc kubenswrapper[5007]: E0218 19:56:49.402672 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-w57rb" podUID="c60b1e07-e980-450b-bfb8-c0210edeae7d" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.494237 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.694782 5007 generic.go:334] "Generic (PLEG): container finished" podID="0bb7faf9-0f01-403d-9c59-c604a002ad4a" containerID="eda71672dfdca4793f88b1f52ebb9e9b718064ef8ec6a93d1d8368dd7fd73a34" exitCode=0 Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.695123 5007 generic.go:334] "Generic (PLEG): container finished" podID="0bb7faf9-0f01-403d-9c59-c604a002ad4a" containerID="1c44635720ed40aa8f5176fcdeb52628db658cb7ac417887edf2e5224e5fbbbe" exitCode=143 Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.694873 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0bb7faf9-0f01-403d-9c59-c604a002ad4a","Type":"ContainerDied","Data":"eda71672dfdca4793f88b1f52ebb9e9b718064ef8ec6a93d1d8368dd7fd73a34"} Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.695267 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"0bb7faf9-0f01-403d-9c59-c604a002ad4a","Type":"ContainerDied","Data":"1c44635720ed40aa8f5176fcdeb52628db658cb7ac417887edf2e5224e5fbbbe"} Feb 18 19:56:49 crc kubenswrapper[5007]: E0218 19:56:49.696843 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-w57rb" podUID="c60b1e07-e980-450b-bfb8-c0210edeae7d" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.930721 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f8c2991-3116-46e0-b302-397291db1589" path="/var/lib/kubelet/pods/5f8c2991-3116-46e0-b302-397291db1589/volumes" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.931471 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9df403-a190-484e-8b58-2e33e44ddd85" path="/var/lib/kubelet/pods/bb9df403-a190-484e-8b58-2e33e44ddd85/volumes" Feb 18 19:56:49 crc kubenswrapper[5007]: I0218 19:56:49.932005 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7957464b6-627s5"] Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.148530 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-784db9c5db-lr9xh"] Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.203712 5007 scope.go:117] "RemoveContainer" containerID="9ba94a3f48a4ea3c1686b42193961341f636681296c9030bdf268c1d9b5034d2" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.474564 5007 scope.go:117] "RemoveContainer" containerID="4097dfd0d03a4ab8a4ac4824c85d001012187b444db723ea079e4fd8bcc1dba9" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.496337 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.598866 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bb7faf9-0f01-403d-9c59-c604a002ad4a-httpd-run\") pod \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.598899 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bb7faf9-0f01-403d-9c59-c604a002ad4a-logs\") pod \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.598963 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-config-data\") pod \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.599023 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-combined-ca-bundle\") pod \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.599049 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86lh2\" (UniqueName: \"kubernetes.io/projected/0bb7faf9-0f01-403d-9c59-c604a002ad4a-kube-api-access-86lh2\") pod \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.599074 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-internal-tls-certs\") pod \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.599149 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.599199 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-scripts\") pod \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\" (UID: \"0bb7faf9-0f01-403d-9c59-c604a002ad4a\") " Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.600802 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb7faf9-0f01-403d-9c59-c604a002ad4a-logs" (OuterVolumeSpecName: "logs") pod "0bb7faf9-0f01-403d-9c59-c604a002ad4a" (UID: "0bb7faf9-0f01-403d-9c59-c604a002ad4a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.602307 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb7faf9-0f01-403d-9c59-c604a002ad4a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0bb7faf9-0f01-403d-9c59-c604a002ad4a" (UID: "0bb7faf9-0f01-403d-9c59-c604a002ad4a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.605081 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "0bb7faf9-0f01-403d-9c59-c604a002ad4a" (UID: "0bb7faf9-0f01-403d-9c59-c604a002ad4a"). 
InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.609355 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb7faf9-0f01-403d-9c59-c604a002ad4a-kube-api-access-86lh2" (OuterVolumeSpecName: "kube-api-access-86lh2") pod "0bb7faf9-0f01-403d-9c59-c604a002ad4a" (UID: "0bb7faf9-0f01-403d-9c59-c604a002ad4a"). InnerVolumeSpecName "kube-api-access-86lh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.628578 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-scripts" (OuterVolumeSpecName: "scripts") pod "0bb7faf9-0f01-403d-9c59-c604a002ad4a" (UID: "0bb7faf9-0f01-403d-9c59-c604a002ad4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.702235 5007 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0bb7faf9-0f01-403d-9c59-c604a002ad4a-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.702600 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bb7faf9-0f01-403d-9c59-c604a002ad4a-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.702611 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86lh2\" (UniqueName: \"kubernetes.io/projected/0bb7faf9-0f01-403d-9c59-c604a002ad4a-kube-api-access-86lh2\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.702640 5007 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.702651 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.716736 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-784db9c5db-lr9xh" event={"ID":"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2","Type":"ContainerStarted","Data":"310b2c37092f2792cc8919b8cd3e3df425dacac21b0138ea04e7f6e4d4ef4fe0"} Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.718628 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7957464b6-627s5" event={"ID":"b695b866-e754-46f6-8cd8-ef152bfc1a54","Type":"ContainerStarted","Data":"56d9874e011ddb5e2d4aff8156a73bc12319723e11e4596ba52d3e14739b7cd4"} Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.731585 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0bb7faf9-0f01-403d-9c59-c604a002ad4a","Type":"ContainerDied","Data":"835f2546213125f9b585ee6b101ec1f683c7a9093e0019f8695c16aa839ba80a"} Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.731634 5007 scope.go:117] "RemoveContainer" containerID="eda71672dfdca4793f88b1f52ebb9e9b718064ef8ec6a93d1d8368dd7fd73a34" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.731705 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.806700 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.835968 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zljlc"] Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.875461 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bb7faf9-0f01-403d-9c59-c604a002ad4a" (UID: "0bb7faf9-0f01-403d-9c59-c604a002ad4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.885593 5007 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.910869 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.910897 5007 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.927962 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0bb7faf9-0f01-403d-9c59-c604a002ad4a" (UID: "0bb7faf9-0f01-403d-9c59-c604a002ad4a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:50 crc kubenswrapper[5007]: I0218 19:56:50.994228 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-config-data" (OuterVolumeSpecName: "config-data") pod "0bb7faf9-0f01-403d-9c59-c604a002ad4a" (UID: "0bb7faf9-0f01-403d-9c59-c604a002ad4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.007375 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-765c898489-2trsf"] Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.013824 5007 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.013860 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bb7faf9-0f01-403d-9c59-c604a002ad4a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.018214 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.035543 5007 scope.go:117] "RemoveContainer" containerID="1c44635720ed40aa8f5176fcdeb52628db658cb7ac417887edf2e5224e5fbbbe" Feb 18 19:56:51 crc kubenswrapper[5007]: W0218 19:56:51.099521 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbda4b1dd_7a09_4ee0_9fd4_dafacc7a151c.slice/crio-c3d5965a0c722bdf99ba4c46cff9ebb12a3ee423b674c313fb6ae4ad697705ef WatchSource:0}: Error finding container c3d5965a0c722bdf99ba4c46cff9ebb12a3ee423b674c313fb6ae4ad697705ef: Status 404 returned error can't find the container with id 
c3d5965a0c722bdf99ba4c46cff9ebb12a3ee423b674c313fb6ae4ad697705ef Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.133721 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.144735 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.162282 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:56:51 crc kubenswrapper[5007]: E0218 19:56:51.162706 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb7faf9-0f01-403d-9c59-c604a002ad4a" containerName="glance-log" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.162722 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb7faf9-0f01-403d-9c59-c604a002ad4a" containerName="glance-log" Feb 18 19:56:51 crc kubenswrapper[5007]: E0218 19:56:51.162748 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb7faf9-0f01-403d-9c59-c604a002ad4a" containerName="glance-httpd" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.162755 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb7faf9-0f01-403d-9c59-c604a002ad4a" containerName="glance-httpd" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.162987 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb7faf9-0f01-403d-9c59-c604a002ad4a" containerName="glance-httpd" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.163015 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb7faf9-0f01-403d-9c59-c604a002ad4a" containerName="glance-log" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.164051 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.168986 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.169179 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.180913 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.242126 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75fb45b9c-prmzx" podUID="5f8c2991-3116-46e0-b302-397291db1589" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: i/o timeout" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.309873 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6977b8795d-9nwjb"] Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.323157 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b43ec936-7a25-476a-8f4d-a5734f6f6044-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.323235 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.323414 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.323452 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6p58\" (UniqueName: \"kubernetes.io/projected/b43ec936-7a25-476a-8f4d-a5734f6f6044-kube-api-access-b6p58\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.323479 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b43ec936-7a25-476a-8f4d-a5734f6f6044-logs\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.323498 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.323591 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.323717 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.361570 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bb55c746c-lpr88"] Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.363134 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.367279 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.367396 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.373442 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb55c746c-lpr88"] Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.425589 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.425641 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-ovndb-tls-certs\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: 
I0218 19:56:51.425674 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-public-tls-certs\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.425702 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n89g\" (UniqueName: \"kubernetes.io/projected/b110d261-b844-4d86-9226-c46707c2ecd4-kube-api-access-4n89g\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.425738 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.425762 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-httpd-config\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.425786 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6p58\" (UniqueName: \"kubernetes.io/projected/b43ec936-7a25-476a-8f4d-a5734f6f6044-kube-api-access-b6p58\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 
19:56:51.425820 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-combined-ca-bundle\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.425846 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b43ec936-7a25-476a-8f4d-a5734f6f6044-logs\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.425882 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.425911 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.425938 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.425985 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-config\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.426033 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b43ec936-7a25-476a-8f4d-a5734f6f6044-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.426056 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-internal-tls-certs\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.427989 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.428873 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b43ec936-7a25-476a-8f4d-a5734f6f6044-logs\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.429179 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/b43ec936-7a25-476a-8f4d-a5734f6f6044-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.440527 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.440610 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.441260 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.441820 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.457122 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6p58\" (UniqueName: 
\"kubernetes.io/projected/b43ec936-7a25-476a-8f4d-a5734f6f6044-kube-api-access-b6p58\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.527707 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-config\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.527810 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-internal-tls-certs\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.527916 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-ovndb-tls-certs\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.527956 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-public-tls-certs\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.527998 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n89g\" (UniqueName: \"kubernetes.io/projected/b110d261-b844-4d86-9226-c46707c2ecd4-kube-api-access-4n89g\") pod 
\"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.528056 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-httpd-config\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.528096 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-combined-ca-bundle\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.549107 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-internal-tls-certs\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.549259 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n89g\" (UniqueName: \"kubernetes.io/projected/b110d261-b844-4d86-9226-c46707c2ecd4-kube-api-access-4n89g\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.549497 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-combined-ca-bundle\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " 
pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.558876 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-httpd-config\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.561043 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-ovndb-tls-certs\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.561404 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-config\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.563301 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-public-tls-certs\") pod \"neutron-bb55c746c-lpr88\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.632536 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.713944 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.766766 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6977b8795d-9nwjb" event={"ID":"a5926c07-fe6b-49a1-bb8a-6b630aa69d76","Type":"ContainerStarted","Data":"2d4d8da2f1fbfedd9480f7ad7acfd6b7ee7340ebb1f8425c18083792a7b268c4"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.774972 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68dccdd97-4vtx9" event={"ID":"1739350e-8a84-4c46-9bd2-7d98b9007dae","Type":"ContainerStarted","Data":"99f4fa773fc11c88474dd29ebd95a27024d8b08f5e4fd87e2072568e925548ae"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.782788 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a6612742-0c85-4677-9495-0e93227acfd6","Type":"ContainerStarted","Data":"206b1e3d418574863021802f67e3d66e6c0e6110435585fce9a928a1b6419270"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.790138 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcdb880d-97da-4651-983e-8ccef0ea3d51","Type":"ContainerStarted","Data":"704fcd518164747a9c307c2a17c09f89da5c9fe240e8604af41dd1f96181671c"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.803633 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zljlc" event={"ID":"ae6b2577-67ac-4c20-8242-e9533bab124d","Type":"ContainerStarted","Data":"05e36c4228d1cc1c00380382b72699f81f8d6012e14cd2a670ff6e0f3de9e124"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.803702 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zljlc" event={"ID":"ae6b2577-67ac-4c20-8242-e9533bab124d","Type":"ContainerStarted","Data":"25f2034a613945504b5994821c789ecdacccda4cc46cb711e99af9563e732559"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.807763 5007 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-784db9c5db-lr9xh" event={"ID":"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2","Type":"ContainerStarted","Data":"1282edef71e6874b467ed84fa45a1490a9fb5476f4f75e76b8e7d72c9a1d8ca1"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.809691 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7957464b6-627s5" event={"ID":"b695b866-e754-46f6-8cd8-ef152bfc1a54","Type":"ContainerStarted","Data":"a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.824904 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.855828 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"a6551271-9859-4f43-8a89-5c5da823ef56","Type":"ContainerStarted","Data":"f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.884367 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zljlc" podStartSLOduration=15.884345634 podStartE2EDuration="15.884345634s" podCreationTimestamp="2026-02-18 19:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:51.84029181 +0000 UTC m=+1096.622411334" watchObservedRunningTime="2026-02-18 19:56:51.884345634 +0000 UTC m=+1096.666465158" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.910233 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617","Type":"ContainerStarted","Data":"53bba64338fdec42ba5d3b6fe3fa5d56b1967880c71e3a466c09f3e30962f70b"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.932987 5007 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=19.437501549 podStartE2EDuration="34.932946888s" podCreationTimestamp="2026-02-18 19:56:17 +0000 UTC" firstStartedPulling="2026-02-18 19:56:19.482267202 +0000 UTC m=+1064.264386726" lastFinishedPulling="2026-02-18 19:56:34.977712531 +0000 UTC m=+1079.759832065" observedRunningTime="2026-02-18 19:56:51.917660193 +0000 UTC m=+1096.699779717" watchObservedRunningTime="2026-02-18 19:56:51.932946888 +0000 UTC m=+1096.715066412" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.961243 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb7faf9-0f01-403d-9c59-c604a002ad4a" path="/var/lib/kubelet/pods/0bb7faf9-0f01-403d-9c59-c604a002ad4a/volumes" Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.962274 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qz65k" event={"ID":"2f906c05-1123-4e20-9792-a4930e299212","Type":"ContainerStarted","Data":"c6e74b32f049099f48f3116d2e3f2d557d1c920e4030fd13573301d9ed0493d6"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.962301 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765c898489-2trsf" event={"ID":"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c","Type":"ContainerStarted","Data":"c3d5965a0c722bdf99ba4c46cff9ebb12a3ee423b674c313fb6ae4ad697705ef"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.962312 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59b5956c5c-bhzjl" event={"ID":"e42c854b-1439-4a01-9a89-c034c0272d32","Type":"ContainerStarted","Data":"e484dfd4f1aa3c42e9598dd88b4a2457edfbf2c055f7b84b2df9ad77ea124bd0"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.967109 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"af8794a0-c909-4418-ab5a-a4d89492f37f","Type":"ContainerStarted","Data":"9a851589bd0be215406165574c6cd5d804eead29f4ff260c78068f33cd79e824"} Feb 18 19:56:51 crc kubenswrapper[5007]: I0218 19:56:51.976774 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=19.027540955 podStartE2EDuration="34.976748446s" podCreationTimestamp="2026-02-18 19:56:17 +0000 UTC" firstStartedPulling="2026-02-18 19:56:19.528058396 +0000 UTC m=+1064.310177920" lastFinishedPulling="2026-02-18 19:56:35.477265887 +0000 UTC m=+1080.259385411" observedRunningTime="2026-02-18 19:56:51.951364503 +0000 UTC m=+1096.733484027" watchObservedRunningTime="2026-02-18 19:56:51.976748446 +0000 UTC m=+1096.758867970" Feb 18 19:56:52 crc kubenswrapper[5007]: I0218 19:56:52.004885 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b6f5f89-58c25" event={"ID":"203ba913-bbde-4801-b85b-b472aef6a7f6","Type":"ContainerStarted","Data":"f4e898144095e4cab8ba7538f7ee70876d31691f3d2d74efd7147971910d92ee"} Feb 18 19:56:52 crc kubenswrapper[5007]: I0218 19:56:52.008586 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qz65k" podStartSLOduration=6.510907931 podStartE2EDuration="34.008570172s" podCreationTimestamp="2026-02-18 19:56:18 +0000 UTC" firstStartedPulling="2026-02-18 19:56:20.056824094 +0000 UTC m=+1064.838943618" lastFinishedPulling="2026-02-18 19:56:47.554486315 +0000 UTC m=+1092.336605859" observedRunningTime="2026-02-18 19:56:52.005599427 +0000 UTC m=+1096.787718951" watchObservedRunningTime="2026-02-18 19:56:52.008570172 +0000 UTC m=+1096.790689696" Feb 18 19:56:52 crc kubenswrapper[5007]: I0218 19:56:52.334374 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb55c746c-lpr88"] Feb 18 19:56:52 crc kubenswrapper[5007]: I0218 19:56:52.778508 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 18 19:56:52 crc kubenswrapper[5007]: W0218 19:56:52.808319 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb43ec936_7a25_476a_8f4d_a5734f6f6044.slice/crio-cb920f6390840226cc88b8eddd70fbbda238f5845e2058e8c27ee401e7c82195 WatchSource:0}: Error finding container cb920f6390840226cc88b8eddd70fbbda238f5845e2058e8c27ee401e7c82195: Status 404 returned error can't find the container with id cb920f6390840226cc88b8eddd70fbbda238f5845e2058e8c27ee401e7c82195 Feb 18 19:56:52 crc kubenswrapper[5007]: I0218 19:56:52.875605 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 18 19:56:53 crc kubenswrapper[5007]: I0218 19:56:53.034472 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a6612742-0c85-4677-9495-0e93227acfd6","Type":"ContainerStarted","Data":"d4765d90ee3f0ec098c71741af727d967f71cab1abf6cb33a30b82cdefc3c0da"} Feb 18 19:56:53 crc kubenswrapper[5007]: I0218 19:56:53.041133 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb55c746c-lpr88" event={"ID":"b110d261-b844-4d86-9226-c46707c2ecd4","Type":"ContainerStarted","Data":"5c686746e0cb543e5830a07c125dcacd61faff2ebe2c48008ed1443cd7338917"} Feb 18 19:56:53 crc kubenswrapper[5007]: I0218 19:56:53.065275 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6977b8795d-9nwjb" event={"ID":"a5926c07-fe6b-49a1-bb8a-6b630aa69d76","Type":"ContainerStarted","Data":"21cbe18a0113f465ef646b2258c92a08fb33cf33cd6d39fc44868596e5abaff6"} Feb 18 19:56:53 crc kubenswrapper[5007]: I0218 19:56:53.070823 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-784db9c5db-lr9xh" event={"ID":"4c312663-7b4a-468e-a0f8-62c9ef9e3fc2","Type":"ContainerStarted","Data":"a522a3e4d7bfc8b39324ad65c8805ae6320b1ffbd6b8dd3f385698c7e3b7f08b"} Feb 18 
19:56:53 crc kubenswrapper[5007]: I0218 19:56:53.072541 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b43ec936-7a25-476a-8f4d-a5734f6f6044","Type":"ContainerStarted","Data":"cb920f6390840226cc88b8eddd70fbbda238f5845e2058e8c27ee401e7c82195"} Feb 18 19:56:53 crc kubenswrapper[5007]: I0218 19:56:53.096700 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-784db9c5db-lr9xh" podStartSLOduration=26.096682538 podStartE2EDuration="26.096682538s" podCreationTimestamp="2026-02-18 19:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:53.095751901 +0000 UTC m=+1097.877871445" watchObservedRunningTime="2026-02-18 19:56:53.096682538 +0000 UTC m=+1097.878802052" Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.118937 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcdb880d-97da-4651-983e-8ccef0ea3d51","Type":"ContainerStarted","Data":"5a7f435e5054236a3e70856dbaa01ee68eb64cba98fc4214bb88521857486b4c"} Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.140967 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6977b8795d-9nwjb" event={"ID":"a5926c07-fe6b-49a1-bb8a-6b630aa69d76","Type":"ContainerStarted","Data":"78f925ced014a3d97f31a5bc4811bc093815cb2e14e282e1a34d374464a16138"} Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.142657 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.151610 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb55c746c-lpr88" event={"ID":"b110d261-b844-4d86-9226-c46707c2ecd4","Type":"ContainerStarted","Data":"bf06207a9ac56ae4342853d3d79982a6a06136f83560a6fbb0cb101b3b10dc8b"} Feb 18 
19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.168294 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b6f5f89-58c25" event={"ID":"203ba913-bbde-4801-b85b-b472aef6a7f6","Type":"ContainerStarted","Data":"a1e6b695ad6ef30972f811b240200ac9367d0685df85ce0e84abc110365fba0a"} Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.168598 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68b6f5f89-58c25" podUID="203ba913-bbde-4801-b85b-b472aef6a7f6" containerName="horizon-log" containerID="cri-o://f4e898144095e4cab8ba7538f7ee70876d31691f3d2d74efd7147971910d92ee" gracePeriod=30 Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.168679 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68b6f5f89-58c25" podUID="203ba913-bbde-4801-b85b-b472aef6a7f6" containerName="horizon" containerID="cri-o://a1e6b695ad6ef30972f811b240200ac9367d0685df85ce0e84abc110365fba0a" gracePeriod=30 Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.181409 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6977b8795d-9nwjb" podStartSLOduration=5.181386517 podStartE2EDuration="5.181386517s" podCreationTimestamp="2026-02-18 19:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:54.166096122 +0000 UTC m=+1098.948215646" watchObservedRunningTime="2026-02-18 19:56:54.181386517 +0000 UTC m=+1098.963506041" Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.206154 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68b6f5f89-58c25" podStartSLOduration=5.671498282 podStartE2EDuration="34.206129532s" podCreationTimestamp="2026-02-18 19:56:20 +0000 UTC" firstStartedPulling="2026-02-18 19:56:21.714457848 +0000 UTC m=+1066.496577372" lastFinishedPulling="2026-02-18 
19:56:50.249089098 +0000 UTC m=+1095.031208622" observedRunningTime="2026-02-18 19:56:54.194086809 +0000 UTC m=+1098.976206353" watchObservedRunningTime="2026-02-18 19:56:54.206129532 +0000 UTC m=+1098.988249076" Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.216960 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7957464b6-627s5" event={"ID":"b695b866-e754-46f6-8cd8-ef152bfc1a54","Type":"ContainerStarted","Data":"60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7"} Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.223290 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68dccdd97-4vtx9" event={"ID":"1739350e-8a84-4c46-9bd2-7d98b9007dae","Type":"ContainerStarted","Data":"1fd64123f497b24af14d4d0c821251c079c19ee93a4cb7637f2dfd2218fa1901"} Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.223405 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68dccdd97-4vtx9" podUID="1739350e-8a84-4c46-9bd2-7d98b9007dae" containerName="horizon-log" containerID="cri-o://99f4fa773fc11c88474dd29ebd95a27024d8b08f5e4fd87e2072568e925548ae" gracePeriod=30 Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.223625 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68dccdd97-4vtx9" podUID="1739350e-8a84-4c46-9bd2-7d98b9007dae" containerName="horizon" containerID="cri-o://1fd64123f497b24af14d4d0c821251c079c19ee93a4cb7637f2dfd2218fa1901" gracePeriod=30 Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.235951 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a6612742-0c85-4677-9495-0e93227acfd6","Type":"ContainerStarted","Data":"9b88bcf24cf06997838cb08406b0c296187409e5f8b2b2dd44ff61e4b2ae03c4"} Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.237141 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" 
Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.286048 5007 generic.go:334] "Generic (PLEG): container finished" podID="bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" containerID="42510dba712afe0dfec9d7ca0d467d36fd35d97f9b71ab5b241e3fcdabbf0316" exitCode=0 Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.286157 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765c898489-2trsf" event={"ID":"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c","Type":"ContainerDied","Data":"42510dba712afe0dfec9d7ca0d467d36fd35d97f9b71ab5b241e3fcdabbf0316"} Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.298342 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59b5956c5c-bhzjl" event={"ID":"e42c854b-1439-4a01-9a89-c034c0272d32","Type":"ContainerStarted","Data":"1d5242aa9ee3c014a0334fb9e3ee9735daa0229342c6e2f016635da62e01a950"} Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.298574 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-59b5956c5c-bhzjl" podUID="e42c854b-1439-4a01-9a89-c034c0272d32" containerName="horizon-log" containerID="cri-o://e484dfd4f1aa3c42e9598dd88b4a2457edfbf2c055f7b84b2df9ad77ea124bd0" gracePeriod=30 Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.299126 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-59b5956c5c-bhzjl" podUID="e42c854b-1439-4a01-9a89-c034c0272d32" containerName="horizon" containerID="cri-o://1d5242aa9ee3c014a0334fb9e3ee9735daa0229342c6e2f016635da62e01a950" gracePeriod=30 Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.303714 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7957464b6-627s5" podStartSLOduration=27.303689779 podStartE2EDuration="27.303689779s" podCreationTimestamp="2026-02-18 19:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:56:54.243112385 +0000 UTC m=+1099.025231909" watchObservedRunningTime="2026-02-18 19:56:54.303689779 +0000 UTC m=+1099.085809303" Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.333000 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68dccdd97-4vtx9" podStartSLOduration=7.031607561 podStartE2EDuration="37.332454728s" podCreationTimestamp="2026-02-18 19:56:17 +0000 UTC" firstStartedPulling="2026-02-18 19:56:19.082204399 +0000 UTC m=+1063.864323923" lastFinishedPulling="2026-02-18 19:56:49.383051566 +0000 UTC m=+1094.165171090" observedRunningTime="2026-02-18 19:56:54.297879433 +0000 UTC m=+1099.079998977" watchObservedRunningTime="2026-02-18 19:56:54.332454728 +0000 UTC m=+1099.114574252" Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.364560 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=6.364537971 podStartE2EDuration="6.364537971s" podCreationTimestamp="2026-02-18 19:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:54.32759984 +0000 UTC m=+1099.109719384" watchObservedRunningTime="2026-02-18 19:56:54.364537971 +0000 UTC m=+1099.146657495" Feb 18 19:56:54 crc kubenswrapper[5007]: I0218 19:56:54.388603 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-59b5956c5c-bhzjl" podStartSLOduration=9.600856745 podStartE2EDuration="37.388582856s" podCreationTimestamp="2026-02-18 19:56:17 +0000 UTC" firstStartedPulling="2026-02-18 19:56:19.732957181 +0000 UTC m=+1064.515076705" lastFinishedPulling="2026-02-18 19:56:47.520683302 +0000 UTC m=+1092.302802816" observedRunningTime="2026-02-18 19:56:54.365802808 +0000 UTC m=+1099.147922342" watchObservedRunningTime="2026-02-18 19:56:54.388582856 +0000 UTC m=+1099.170702380" Feb 18 19:56:55 crc 
kubenswrapper[5007]: I0218 19:56:55.362396 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb55c746c-lpr88" event={"ID":"b110d261-b844-4d86-9226-c46707c2ecd4","Type":"ContainerStarted","Data":"2de1370864bd8d4db31250bfbcf1aea05924b76da01ff9133f9f833454f7729e"} Feb 18 19:56:55 crc kubenswrapper[5007]: I0218 19:56:55.363013 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:56:55 crc kubenswrapper[5007]: I0218 19:56:55.366400 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b43ec936-7a25-476a-8f4d-a5734f6f6044","Type":"ContainerStarted","Data":"9a89c0d2e562f5ff4df3f15db7317e07d5f4d3e0a6e908649e6e2c9127a84298"} Feb 18 19:56:55 crc kubenswrapper[5007]: I0218 19:56:55.368332 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765c898489-2trsf" event={"ID":"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c","Type":"ContainerStarted","Data":"b55d84867b96866b4985b5b892baa75aeed5f96ed4b31004b4c3b000262a13cb"} Feb 18 19:56:55 crc kubenswrapper[5007]: I0218 19:56:55.369062 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:55 crc kubenswrapper[5007]: I0218 19:56:55.372018 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcdb880d-97da-4651-983e-8ccef0ea3d51","Type":"ContainerStarted","Data":"748a01dc1585cdee74af6c83fbe1e9b37faac76a7ac811dbf84751d32d0ee76a"} Feb 18 19:56:55 crc kubenswrapper[5007]: I0218 19:56:55.385412 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bb55c746c-lpr88" podStartSLOduration=4.385398543 podStartE2EDuration="4.385398543s" podCreationTimestamp="2026-02-18 19:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:56:55.381468281 +0000 UTC m=+1100.163587815" watchObservedRunningTime="2026-02-18 19:56:55.385398543 +0000 UTC m=+1100.167518067" Feb 18 19:56:55 crc kubenswrapper[5007]: I0218 19:56:55.407293 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-765c898489-2trsf" podStartSLOduration=7.407272775 podStartE2EDuration="7.407272775s" podCreationTimestamp="2026-02-18 19:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:55.406775081 +0000 UTC m=+1100.188894615" watchObservedRunningTime="2026-02-18 19:56:55.407272775 +0000 UTC m=+1100.189392299" Feb 18 19:56:55 crc kubenswrapper[5007]: I0218 19:56:55.456344 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=20.456305692 podStartE2EDuration="20.456305692s" podCreationTimestamp="2026-02-18 19:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:55.430888098 +0000 UTC m=+1100.213007632" watchObservedRunningTime="2026-02-18 19:56:55.456305692 +0000 UTC m=+1100.238425216" Feb 18 19:56:55 crc kubenswrapper[5007]: I0218 19:56:55.947368 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:56:55 crc kubenswrapper[5007]: I0218 19:56:55.947746 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:56:56 crc kubenswrapper[5007]: I0218 19:56:56.007663 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:56:56 crc kubenswrapper[5007]: I0218 19:56:56.010711 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Feb 18 19:56:56 crc kubenswrapper[5007]: I0218 19:56:56.386066 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b43ec936-7a25-476a-8f4d-a5734f6f6044","Type":"ContainerStarted","Data":"75ab128b0d92742a8fd6e43abd579e86b53cadc0d777c173e1ba83ee74ef0139"} Feb 18 19:56:56 crc kubenswrapper[5007]: I0218 19:56:56.388415 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-spkvd" event={"ID":"757421ca-4bfc-42d6-ae73-bb479f3b5975","Type":"ContainerStarted","Data":"197db4c397b15798e3dbbb0c31b09c8262bebec449fb2e861f0cf06517226316"} Feb 18 19:56:56 crc kubenswrapper[5007]: I0218 19:56:56.388774 5007 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:56:56 crc kubenswrapper[5007]: I0218 19:56:56.389135 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:56:56 crc kubenswrapper[5007]: I0218 19:56:56.389155 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:56:56 crc kubenswrapper[5007]: I0218 19:56:56.424739 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.424720209 podStartE2EDuration="5.424720209s" podCreationTimestamp="2026-02-18 19:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:56.406044087 +0000 UTC m=+1101.188163621" watchObservedRunningTime="2026-02-18 19:56:56.424720209 +0000 UTC m=+1101.206839733" Feb 18 19:56:56 crc kubenswrapper[5007]: I0218 19:56:56.433262 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-spkvd" podStartSLOduration=3.818158514 podStartE2EDuration="39.433247202s" 
podCreationTimestamp="2026-02-18 19:56:17 +0000 UTC" firstStartedPulling="2026-02-18 19:56:19.53276943 +0000 UTC m=+1064.314888954" lastFinishedPulling="2026-02-18 19:56:55.147858128 +0000 UTC m=+1099.929977642" observedRunningTime="2026-02-18 19:56:56.419936663 +0000 UTC m=+1101.202056187" watchObservedRunningTime="2026-02-18 19:56:56.433247202 +0000 UTC m=+1101.215366726" Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.113227 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.398498 5007 generic.go:334] "Generic (PLEG): container finished" podID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerID="53bba64338fdec42ba5d3b6fe3fa5d56b1967880c71e3a466c09f3e30962f70b" exitCode=1 Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.398558 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617","Type":"ContainerDied","Data":"53bba64338fdec42ba5d3b6fe3fa5d56b1967880c71e3a466c09f3e30962f70b"} Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.399217 5007 scope.go:117] "RemoveContainer" containerID="53bba64338fdec42ba5d3b6fe3fa5d56b1967880c71e3a466c09f3e30962f70b" Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.404185 5007 generic.go:334] "Generic (PLEG): container finished" podID="2f906c05-1123-4e20-9792-a4930e299212" containerID="c6e74b32f049099f48f3116d2e3f2d557d1c920e4030fd13573301d9ed0493d6" exitCode=0 Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.404332 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qz65k" event={"ID":"2f906c05-1123-4e20-9792-a4930e299212","Type":"ContainerDied","Data":"c6e74b32f049099f48f3116d2e3f2d557d1c920e4030fd13573301d9ed0493d6"} Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.522749 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.522819 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7957464b6-627s5" Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.604152 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.605402 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.875460 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.907031 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.931553 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:56:57 crc kubenswrapper[5007]: I0218 19:56:57.931613 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:56:58 crc kubenswrapper[5007]: I0218 19:56:58.157388 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:56:58 crc kubenswrapper[5007]: I0218 19:56:58.418212 5007 generic.go:334] "Generic (PLEG): container finished" podID="ae6b2577-67ac-4c20-8242-e9533bab124d" containerID="05e36c4228d1cc1c00380382b72699f81f8d6012e14cd2a670ff6e0f3de9e124" exitCode=0 Feb 18 19:56:58 crc kubenswrapper[5007]: I0218 19:56:58.418359 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zljlc" 
event={"ID":"ae6b2577-67ac-4c20-8242-e9533bab124d","Type":"ContainerDied","Data":"05e36c4228d1cc1c00380382b72699f81f8d6012e14cd2a670ff6e0f3de9e124"} Feb 18 19:56:58 crc kubenswrapper[5007]: I0218 19:56:58.476224 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 18 19:56:58 crc kubenswrapper[5007]: I0218 19:56:58.511720 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:56:58 crc kubenswrapper[5007]: I0218 19:56:58.592504 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:56:59 crc kubenswrapper[5007]: I0218 19:56:59.130928 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 19:56:59 crc kubenswrapper[5007]: I0218 19:56:59.131194 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:56:59 crc kubenswrapper[5007]: I0218 19:56:59.165617 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 19:56:59 crc kubenswrapper[5007]: I0218 19:56:59.344692 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:56:59 crc kubenswrapper[5007]: I0218 19:56:59.426131 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74c64d6b7c-xfb47"] Feb 18 19:56:59 crc kubenswrapper[5007]: I0218 19:56:59.426375 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" podUID="f87fc392-f98a-45e5-8f18-c3efbdf19e1a" containerName="dnsmasq-dns" containerID="cri-o://e003b9b54791a8b096007fc8dd05ff4f0c648357d5cfe3bdccbd884cd1bdc921" gracePeriod=10 Feb 18 19:56:59 crc kubenswrapper[5007]: I0218 19:56:59.508045 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/watcher-api-0" Feb 18 19:57:00 crc kubenswrapper[5007]: I0218 19:57:00.480778 5007 generic.go:334] "Generic (PLEG): container finished" podID="f87fc392-f98a-45e5-8f18-c3efbdf19e1a" containerID="e003b9b54791a8b096007fc8dd05ff4f0c648357d5cfe3bdccbd884cd1bdc921" exitCode=0 Feb 18 19:57:00 crc kubenswrapper[5007]: I0218 19:57:00.480995 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" event={"ID":"f87fc392-f98a-45e5-8f18-c3efbdf19e1a","Type":"ContainerDied","Data":"e003b9b54791a8b096007fc8dd05ff4f0c648357d5cfe3bdccbd884cd1bdc921"} Feb 18 19:57:00 crc kubenswrapper[5007]: I0218 19:57:00.481172 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="a6551271-9859-4f43-8a89-5c5da823ef56" containerName="watcher-applier" containerID="cri-o://f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f" gracePeriod=30 Feb 18 19:57:01 crc kubenswrapper[5007]: I0218 19:57:01.015177 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:57:01 crc kubenswrapper[5007]: I0218 19:57:01.826411 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:57:01 crc kubenswrapper[5007]: I0218 19:57:01.826665 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:57:01 crc kubenswrapper[5007]: I0218 19:57:01.870144 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:57:01 crc kubenswrapper[5007]: I0218 19:57:01.874189 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:57:02 crc kubenswrapper[5007]: I0218 19:57:02.498263 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Feb 18 19:57:02 crc kubenswrapper[5007]: I0218 19:57:02.498302 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:57:02 crc kubenswrapper[5007]: I0218 19:57:02.747197 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:57:02 crc kubenswrapper[5007]: I0218 19:57:02.747437 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="a6612742-0c85-4677-9495-0e93227acfd6" containerName="watcher-api-log" containerID="cri-o://d4765d90ee3f0ec098c71741af727d967f71cab1abf6cb33a30b82cdefc3c0da" gracePeriod=30 Feb 18 19:57:02 crc kubenswrapper[5007]: I0218 19:57:02.747508 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="a6612742-0c85-4677-9495-0e93227acfd6" containerName="watcher-api" containerID="cri-o://9b88bcf24cf06997838cb08406b0c296187409e5f8b2b2dd44ff61e4b2ae03c4" gracePeriod=30 Feb 18 19:57:02 crc kubenswrapper[5007]: E0218 19:57:02.877309 5007 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:57:02 crc kubenswrapper[5007]: E0218 19:57:02.898647 5007 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:57:02 crc kubenswrapper[5007]: E0218 19:57:02.901676 5007 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:57:02 crc kubenswrapper[5007]: E0218 19:57:02.901722 5007 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="a6551271-9859-4f43-8a89-5c5da823ef56" containerName="watcher-applier" Feb 18 19:57:03 crc kubenswrapper[5007]: I0218 19:57:03.506520 5007 generic.go:334] "Generic (PLEG): container finished" podID="a6612742-0c85-4677-9495-0e93227acfd6" containerID="d4765d90ee3f0ec098c71741af727d967f71cab1abf6cb33a30b82cdefc3c0da" exitCode=143 Feb 18 19:57:03 crc kubenswrapper[5007]: I0218 19:57:03.506584 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a6612742-0c85-4677-9495-0e93227acfd6","Type":"ContainerDied","Data":"d4765d90ee3f0ec098c71741af727d967f71cab1abf6cb33a30b82cdefc3c0da"} Feb 18 19:57:03 crc kubenswrapper[5007]: I0218 19:57:03.541656 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" podUID="f87fc392-f98a-45e5-8f18-c3efbdf19e1a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.156:5353: connect: connection refused" Feb 18 19:57:04 crc kubenswrapper[5007]: I0218 19:57:04.131253 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a6612742-0c85-4677-9495-0e93227acfd6" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": dial tcp 10.217.0.167:9322: connect: connection refused" Feb 18 19:57:04 crc kubenswrapper[5007]: I0218 19:57:04.131487 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" 
podUID="a6612742-0c85-4677-9495-0e93227acfd6" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": dial tcp 10.217.0.167:9322: connect: connection refused" Feb 18 19:57:04 crc kubenswrapper[5007]: I0218 19:57:04.521648 5007 generic.go:334] "Generic (PLEG): container finished" podID="a6612742-0c85-4677-9495-0e93227acfd6" containerID="9b88bcf24cf06997838cb08406b0c296187409e5f8b2b2dd44ff61e4b2ae03c4" exitCode=0 Feb 18 19:57:04 crc kubenswrapper[5007]: I0218 19:57:04.521885 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a6612742-0c85-4677-9495-0e93227acfd6","Type":"ContainerDied","Data":"9b88bcf24cf06997838cb08406b0c296187409e5f8b2b2dd44ff61e4b2ae03c4"} Feb 18 19:57:04 crc kubenswrapper[5007]: I0218 19:57:04.523105 5007 generic.go:334] "Generic (PLEG): container finished" podID="a6551271-9859-4f43-8a89-5c5da823ef56" containerID="f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f" exitCode=0 Feb 18 19:57:04 crc kubenswrapper[5007]: I0218 19:57:04.523205 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"a6551271-9859-4f43-8a89-5c5da823ef56","Type":"ContainerDied","Data":"f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f"} Feb 18 19:57:05 crc kubenswrapper[5007]: I0218 19:57:05.700082 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:57:05 crc kubenswrapper[5007]: I0218 19:57:05.700169 5007 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:57:05 crc kubenswrapper[5007]: I0218 19:57:05.716456 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:57:07 crc kubenswrapper[5007]: I0218 19:57:07.525022 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7957464b6-627s5" 
podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Feb 18 19:57:07 crc kubenswrapper[5007]: I0218 19:57:07.610010 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-784db9c5db-lr9xh" podUID="4c312663-7b4a-468e-a0f8-62c9ef9e3fc2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 18 19:57:07 crc kubenswrapper[5007]: E0218 19:57:07.875814 5007 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f is running failed: container process not found" containerID="f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:57:07 crc kubenswrapper[5007]: E0218 19:57:07.876396 5007 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f is running failed: container process not found" containerID="f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:57:07 crc kubenswrapper[5007]: E0218 19:57:07.876629 5007 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f is running failed: container process not found" containerID="f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f" 
cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:57:07 crc kubenswrapper[5007]: E0218 19:57:07.876660 5007 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="a6551271-9859-4f43-8a89-5c5da823ef56" containerName="watcher-applier" Feb 18 19:57:07 crc kubenswrapper[5007]: I0218 19:57:07.951877 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:57:08 crc kubenswrapper[5007]: I0218 19:57:08.542151 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" podUID="f87fc392-f98a-45e5-8f18-c3efbdf19e1a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.156:5353: connect: connection refused" Feb 18 19:57:08 crc kubenswrapper[5007]: I0218 19:57:08.563821 5007 generic.go:334] "Generic (PLEG): container finished" podID="757421ca-4bfc-42d6-ae73-bb479f3b5975" containerID="197db4c397b15798e3dbbb0c31b09c8262bebec449fb2e861f0cf06517226316" exitCode=0 Feb 18 19:57:08 crc kubenswrapper[5007]: I0218 19:57:08.563880 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-spkvd" event={"ID":"757421ca-4bfc-42d6-ae73-bb479f3b5975","Type":"ContainerDied","Data":"197db4c397b15798e3dbbb0c31b09c8262bebec449fb2e861f0cf06517226316"} Feb 18 19:57:08 crc kubenswrapper[5007]: I0218 19:57:08.821020 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:57:09 crc kubenswrapper[5007]: I0218 19:57:09.130929 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a6612742-0c85-4677-9495-0e93227acfd6" containerName="watcher-api" probeResult="failure" 
output="Get \"http://10.217.0.167:9322/\": dial tcp 10.217.0.167:9322: connect: connection refused" Feb 18 19:57:09 crc kubenswrapper[5007]: I0218 19:57:09.130943 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a6612742-0c85-4677-9495-0e93227acfd6" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9322/\": dial tcp 10.217.0.167:9322: connect: connection refused" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.316472 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qz65k" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.343343 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-spkvd" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.358066 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.428809 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.454062 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-scripts\") pod \"757421ca-4bfc-42d6-ae73-bb479f3b5975\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.454161 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f906c05-1123-4e20-9792-a4930e299212-combined-ca-bundle\") pod \"2f906c05-1123-4e20-9792-a4930e299212\" (UID: \"2f906c05-1123-4e20-9792-a4930e299212\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.454206 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-config-data\") pod \"757421ca-4bfc-42d6-ae73-bb479f3b5975\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.454273 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcr8k\" (UniqueName: \"kubernetes.io/projected/757421ca-4bfc-42d6-ae73-bb479f3b5975-kube-api-access-xcr8k\") pod \"757421ca-4bfc-42d6-ae73-bb479f3b5975\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.454338 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-combined-ca-bundle\") pod \"757421ca-4bfc-42d6-ae73-bb479f3b5975\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.454379 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/757421ca-4bfc-42d6-ae73-bb479f3b5975-logs\") pod \"757421ca-4bfc-42d6-ae73-bb479f3b5975\" (UID: \"757421ca-4bfc-42d6-ae73-bb479f3b5975\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.454419 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnk24\" (UniqueName: \"kubernetes.io/projected/2f906c05-1123-4e20-9792-a4930e299212-kube-api-access-tnk24\") pod \"2f906c05-1123-4e20-9792-a4930e299212\" (UID: \"2f906c05-1123-4e20-9792-a4930e299212\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.454456 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f906c05-1123-4e20-9792-a4930e299212-db-sync-config-data\") pod \"2f906c05-1123-4e20-9792-a4930e299212\" (UID: \"2f906c05-1123-4e20-9792-a4930e299212\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.456001 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/757421ca-4bfc-42d6-ae73-bb479f3b5975-logs" (OuterVolumeSpecName: "logs") pod "757421ca-4bfc-42d6-ae73-bb479f3b5975" (UID: "757421ca-4bfc-42d6-ae73-bb479f3b5975"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.463874 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/757421ca-4bfc-42d6-ae73-bb479f3b5975-kube-api-access-xcr8k" (OuterVolumeSpecName: "kube-api-access-xcr8k") pod "757421ca-4bfc-42d6-ae73-bb479f3b5975" (UID: "757421ca-4bfc-42d6-ae73-bb479f3b5975"). InnerVolumeSpecName "kube-api-access-xcr8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.464019 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f906c05-1123-4e20-9792-a4930e299212-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2f906c05-1123-4e20-9792-a4930e299212" (UID: "2f906c05-1123-4e20-9792-a4930e299212"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.464062 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f906c05-1123-4e20-9792-a4930e299212-kube-api-access-tnk24" (OuterVolumeSpecName: "kube-api-access-tnk24") pod "2f906c05-1123-4e20-9792-a4930e299212" (UID: "2f906c05-1123-4e20-9792-a4930e299212"). InnerVolumeSpecName "kube-api-access-tnk24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.466359 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-scripts" (OuterVolumeSpecName: "scripts") pod "757421ca-4bfc-42d6-ae73-bb479f3b5975" (UID: "757421ca-4bfc-42d6-ae73-bb479f3b5975"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.510078 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "757421ca-4bfc-42d6-ae73-bb479f3b5975" (UID: "757421ca-4bfc-42d6-ae73-bb479f3b5975"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.515535 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-config-data" (OuterVolumeSpecName: "config-data") pod "757421ca-4bfc-42d6-ae73-bb479f3b5975" (UID: "757421ca-4bfc-42d6-ae73-bb479f3b5975"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.530335 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f906c05-1123-4e20-9792-a4930e299212-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f906c05-1123-4e20-9792-a4930e299212" (UID: "2f906c05-1123-4e20-9792-a4930e299212"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.556393 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk6qp\" (UniqueName: \"kubernetes.io/projected/ae6b2577-67ac-4c20-8242-e9533bab124d-kube-api-access-qk6qp\") pod \"ae6b2577-67ac-4c20-8242-e9533bab124d\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.556457 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2fms\" (UniqueName: \"kubernetes.io/projected/a6551271-9859-4f43-8a89-5c5da823ef56-kube-api-access-f2fms\") pod \"a6551271-9859-4f43-8a89-5c5da823ef56\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.556573 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6551271-9859-4f43-8a89-5c5da823ef56-config-data\") pod \"a6551271-9859-4f43-8a89-5c5da823ef56\" (UID: 
\"a6551271-9859-4f43-8a89-5c5da823ef56\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.556590 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6551271-9859-4f43-8a89-5c5da823ef56-combined-ca-bundle\") pod \"a6551271-9859-4f43-8a89-5c5da823ef56\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.556649 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-combined-ca-bundle\") pod \"ae6b2577-67ac-4c20-8242-e9533bab124d\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.556692 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-config-data\") pod \"ae6b2577-67ac-4c20-8242-e9533bab124d\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.556712 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-fernet-keys\") pod \"ae6b2577-67ac-4c20-8242-e9533bab124d\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.556740 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-scripts\") pod \"ae6b2577-67ac-4c20-8242-e9533bab124d\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.556779 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-credential-keys\") pod \"ae6b2577-67ac-4c20-8242-e9533bab124d\" (UID: \"ae6b2577-67ac-4c20-8242-e9533bab124d\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.556812 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6551271-9859-4f43-8a89-5c5da823ef56-logs\") pod \"a6551271-9859-4f43-8a89-5c5da823ef56\" (UID: \"a6551271-9859-4f43-8a89-5c5da823ef56\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.557156 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.557167 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f906c05-1123-4e20-9792-a4930e299212-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.557177 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.557185 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcr8k\" (UniqueName: \"kubernetes.io/projected/757421ca-4bfc-42d6-ae73-bb479f3b5975-kube-api-access-xcr8k\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.557193 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757421ca-4bfc-42d6-ae73-bb479f3b5975-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.557221 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/757421ca-4bfc-42d6-ae73-bb479f3b5975-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.557230 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnk24\" (UniqueName: \"kubernetes.io/projected/2f906c05-1123-4e20-9792-a4930e299212-kube-api-access-tnk24\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.557238 5007 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f906c05-1123-4e20-9792-a4930e299212-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.557484 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6551271-9859-4f43-8a89-5c5da823ef56-logs" (OuterVolumeSpecName: "logs") pod "a6551271-9859-4f43-8a89-5c5da823ef56" (UID: "a6551271-9859-4f43-8a89-5c5da823ef56"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.560765 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6b2577-67ac-4c20-8242-e9533bab124d-kube-api-access-qk6qp" (OuterVolumeSpecName: "kube-api-access-qk6qp") pod "ae6b2577-67ac-4c20-8242-e9533bab124d" (UID: "ae6b2577-67ac-4c20-8242-e9533bab124d"). InnerVolumeSpecName "kube-api-access-qk6qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.565409 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6551271-9859-4f43-8a89-5c5da823ef56-kube-api-access-f2fms" (OuterVolumeSpecName: "kube-api-access-f2fms") pod "a6551271-9859-4f43-8a89-5c5da823ef56" (UID: "a6551271-9859-4f43-8a89-5c5da823ef56"). InnerVolumeSpecName "kube-api-access-f2fms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.576101 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ae6b2577-67ac-4c20-8242-e9533bab124d" (UID: "ae6b2577-67ac-4c20-8242-e9533bab124d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.579526 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-scripts" (OuterVolumeSpecName: "scripts") pod "ae6b2577-67ac-4c20-8242-e9533bab124d" (UID: "ae6b2577-67ac-4c20-8242-e9533bab124d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.579589 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ae6b2577-67ac-4c20-8242-e9533bab124d" (UID: "ae6b2577-67ac-4c20-8242-e9533bab124d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.604149 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-config-data" (OuterVolumeSpecName: "config-data") pod "ae6b2577-67ac-4c20-8242-e9533bab124d" (UID: "ae6b2577-67ac-4c20-8242-e9533bab124d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.604611 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae6b2577-67ac-4c20-8242-e9533bab124d" (UID: "ae6b2577-67ac-4c20-8242-e9533bab124d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.611883 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qz65k" event={"ID":"2f906c05-1123-4e20-9792-a4930e299212","Type":"ContainerDied","Data":"d38c8194881bf7338e6a3a6dda825c750a43ec6ba40d63386d12e0b081092b4b"} Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.611920 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d38c8194881bf7338e6a3a6dda825c750a43ec6ba40d63386d12e0b081092b4b" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.611994 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qz65k" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.622235 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"a6551271-9859-4f43-8a89-5c5da823ef56","Type":"ContainerDied","Data":"ec7eadc2e8573af333a973e0ee4aebd80e5a10de4854279d0350118122f7ae43"} Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.622303 5007 scope.go:117] "RemoveContainer" containerID="f09978b165e7e0000eef8d7470dc774c8cbec670dad38a59094d91bd98f5b22f" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.622254 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.636042 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zljlc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.636656 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zljlc" event={"ID":"ae6b2577-67ac-4c20-8242-e9533bab124d","Type":"ContainerDied","Data":"25f2034a613945504b5994821c789ecdacccda4cc46cb711e99af9563e732559"} Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.636709 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f2034a613945504b5994821c789ecdacccda4cc46cb711e99af9563e732559" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.640237 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6551271-9859-4f43-8a89-5c5da823ef56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6551271-9859-4f43-8a89-5c5da823ef56" (UID: "a6551271-9859-4f43-8a89-5c5da823ef56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.646472 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af8794a0-c909-4418-ab5a-a4d89492f37f","Type":"ContainerStarted","Data":"650df3da70c3bec49b2c1ec9be1085770e2df8f56d43164e092d221fcad71d42"} Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.649566 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-spkvd" event={"ID":"757421ca-4bfc-42d6-ae73-bb479f3b5975","Type":"ContainerDied","Data":"9cd83df916d197ce5e61bd4b073c9c7b72073990ae0fc697549a3c6768261b16"} Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.649590 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cd83df916d197ce5e61bd4b073c9c7b72073990ae0fc697549a3c6768261b16" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.649651 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-spkvd" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.650528 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.659348 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6551271-9859-4f43-8a89-5c5da823ef56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.659372 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.659381 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.659389 5007 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.659397 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.659406 5007 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae6b2577-67ac-4c20-8242-e9533bab124d-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.659414 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a6551271-9859-4f43-8a89-5c5da823ef56-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.659422 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk6qp\" (UniqueName: \"kubernetes.io/projected/ae6b2577-67ac-4c20-8242-e9533bab124d-kube-api-access-qk6qp\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.659449 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2fms\" (UniqueName: \"kubernetes.io/projected/a6551271-9859-4f43-8a89-5c5da823ef56-kube-api-access-f2fms\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.659909 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617","Type":"ContainerStarted","Data":"fc90edc84d1b4a0b247a94fbe61aaefeb37cfffbaad11795df3894ac691a8cef"} Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.725850 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.755780 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6551271-9859-4f43-8a89-5c5da823ef56-config-data" (OuterVolumeSpecName: "config-data") pod "a6551271-9859-4f43-8a89-5c5da823ef56" (UID: "a6551271-9859-4f43-8a89-5c5da823ef56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.770201 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-custom-prometheus-ca\") pod \"a6612742-0c85-4677-9495-0e93227acfd6\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.770523 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-config-data\") pod \"a6612742-0c85-4677-9495-0e93227acfd6\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.770666 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6612742-0c85-4677-9495-0e93227acfd6-logs\") pod \"a6612742-0c85-4677-9495-0e93227acfd6\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.770887 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-combined-ca-bundle\") pod \"a6612742-0c85-4677-9495-0e93227acfd6\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.771017 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wc5v\" (UniqueName: \"kubernetes.io/projected/a6612742-0c85-4677-9495-0e93227acfd6-kube-api-access-6wc5v\") pod \"a6612742-0c85-4677-9495-0e93227acfd6\" (UID: \"a6612742-0c85-4677-9495-0e93227acfd6\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.771939 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a6551271-9859-4f43-8a89-5c5da823ef56-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.772079 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6612742-0c85-4677-9495-0e93227acfd6-logs" (OuterVolumeSpecName: "logs") pod "a6612742-0c85-4677-9495-0e93227acfd6" (UID: "a6612742-0c85-4677-9495-0e93227acfd6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.776382 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6988d9b8b6-5qsnc"] Feb 18 19:57:10 crc kubenswrapper[5007]: E0218 19:57:10.776872 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f906c05-1123-4e20-9792-a4930e299212" containerName="barbican-db-sync" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.776885 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f906c05-1123-4e20-9792-a4930e299212" containerName="barbican-db-sync" Feb 18 19:57:10 crc kubenswrapper[5007]: E0218 19:57:10.776907 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6551271-9859-4f43-8a89-5c5da823ef56" containerName="watcher-applier" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.776915 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6551271-9859-4f43-8a89-5c5da823ef56" containerName="watcher-applier" Feb 18 19:57:10 crc kubenswrapper[5007]: E0218 19:57:10.776934 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87fc392-f98a-45e5-8f18-c3efbdf19e1a" containerName="init" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.776940 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87fc392-f98a-45e5-8f18-c3efbdf19e1a" containerName="init" Feb 18 19:57:10 crc kubenswrapper[5007]: E0218 19:57:10.776953 5007 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f87fc392-f98a-45e5-8f18-c3efbdf19e1a" containerName="dnsmasq-dns" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.776959 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87fc392-f98a-45e5-8f18-c3efbdf19e1a" containerName="dnsmasq-dns" Feb 18 19:57:10 crc kubenswrapper[5007]: E0218 19:57:10.776978 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6b2577-67ac-4c20-8242-e9533bab124d" containerName="keystone-bootstrap" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.776985 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6b2577-67ac-4c20-8242-e9533bab124d" containerName="keystone-bootstrap" Feb 18 19:57:10 crc kubenswrapper[5007]: E0218 19:57:10.776996 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6612742-0c85-4677-9495-0e93227acfd6" containerName="watcher-api-log" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.777005 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6612742-0c85-4677-9495-0e93227acfd6" containerName="watcher-api-log" Feb 18 19:57:10 crc kubenswrapper[5007]: E0218 19:57:10.777019 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757421ca-4bfc-42d6-ae73-bb479f3b5975" containerName="placement-db-sync" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.777024 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="757421ca-4bfc-42d6-ae73-bb479f3b5975" containerName="placement-db-sync" Feb 18 19:57:10 crc kubenswrapper[5007]: E0218 19:57:10.777033 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6612742-0c85-4677-9495-0e93227acfd6" containerName="watcher-api" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.777039 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6612742-0c85-4677-9495-0e93227acfd6" containerName="watcher-api" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.778045 5007 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ae6b2577-67ac-4c20-8242-e9533bab124d" containerName="keystone-bootstrap" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.778084 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6612742-0c85-4677-9495-0e93227acfd6" containerName="watcher-api" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.778094 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6612742-0c85-4677-9495-0e93227acfd6" containerName="watcher-api-log" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.778111 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6551271-9859-4f43-8a89-5c5da823ef56" containerName="watcher-applier" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.778121 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f87fc392-f98a-45e5-8f18-c3efbdf19e1a" containerName="dnsmasq-dns" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.778129 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f906c05-1123-4e20-9792-a4930e299212" containerName="barbican-db-sync" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.778145 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="757421ca-4bfc-42d6-ae73-bb479f3b5975" containerName="placement-db-sync" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.778193 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6612742-0c85-4677-9495-0e93227acfd6-kube-api-access-6wc5v" (OuterVolumeSpecName: "kube-api-access-6wc5v") pod "a6612742-0c85-4677-9495-0e93227acfd6" (UID: "a6612742-0c85-4677-9495-0e93227acfd6"). InnerVolumeSpecName "kube-api-access-6wc5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.779228 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.784001 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.784275 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.784499 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.784790 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.785228 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-x89jd" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.801798 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6988d9b8b6-5qsnc"] Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.804604 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "a6612742-0c85-4677-9495-0e93227acfd6" (UID: "a6612742-0c85-4677-9495-0e93227acfd6"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.822380 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6612742-0c85-4677-9495-0e93227acfd6" (UID: "a6612742-0c85-4677-9495-0e93227acfd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.844637 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-config-data" (OuterVolumeSpecName: "config-data") pod "a6612742-0c85-4677-9495-0e93227acfd6" (UID: "a6612742-0c85-4677-9495-0e93227acfd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.873299 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-config\") pod \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.873360 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hngxj\" (UniqueName: \"kubernetes.io/projected/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-kube-api-access-hngxj\") pod \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.873514 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-ovsdbserver-nb\") pod \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.873568 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-ovsdbserver-sb\") pod \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.873737 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-dns-swift-storage-0\") pod \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.873769 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-dns-svc\") pod \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\" (UID: \"f87fc392-f98a-45e5-8f18-c3efbdf19e1a\") " Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.874002 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-config-data\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.874031 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6k4\" (UniqueName: \"kubernetes.io/projected/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-kube-api-access-jh6k4\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.874048 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-logs\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.874558 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-internal-tls-certs\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.874634 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-combined-ca-bundle\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.874692 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-scripts\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.875027 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-public-tls-certs\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.875308 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.875327 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wc5v\" (UniqueName: \"kubernetes.io/projected/a6612742-0c85-4677-9495-0e93227acfd6-kube-api-access-6wc5v\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc 
kubenswrapper[5007]: I0218 19:57:10.875341 5007 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.875354 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6612742-0c85-4677-9495-0e93227acfd6-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.875366 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6612742-0c85-4677-9495-0e93227acfd6-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.876546 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-kube-api-access-hngxj" (OuterVolumeSpecName: "kube-api-access-hngxj") pod "f87fc392-f98a-45e5-8f18-c3efbdf19e1a" (UID: "f87fc392-f98a-45e5-8f18-c3efbdf19e1a"). InnerVolumeSpecName "kube-api-access-hngxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.926108 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f87fc392-f98a-45e5-8f18-c3efbdf19e1a" (UID: "f87fc392-f98a-45e5-8f18-c3efbdf19e1a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.937987 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f87fc392-f98a-45e5-8f18-c3efbdf19e1a" (UID: "f87fc392-f98a-45e5-8f18-c3efbdf19e1a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.938047 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f87fc392-f98a-45e5-8f18-c3efbdf19e1a" (UID: "f87fc392-f98a-45e5-8f18-c3efbdf19e1a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.954378 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-config" (OuterVolumeSpecName: "config") pod "f87fc392-f98a-45e5-8f18-c3efbdf19e1a" (UID: "f87fc392-f98a-45e5-8f18-c3efbdf19e1a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.955034 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f87fc392-f98a-45e5-8f18-c3efbdf19e1a" (UID: "f87fc392-f98a-45e5-8f18-c3efbdf19e1a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.963755 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.977563 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-scripts\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.977683 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-public-tls-certs\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.977724 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-config-data\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.977768 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh6k4\" (UniqueName: \"kubernetes.io/projected/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-kube-api-access-jh6k4\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.977786 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-logs\") pod 
\"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.977831 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-internal-tls-certs\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.977883 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-combined-ca-bundle\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.977939 5007 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.977950 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.977958 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.978825 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-logs\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") 
" pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.979603 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hngxj\" (UniqueName: \"kubernetes.io/projected/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-kube-api-access-hngxj\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.979622 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.979631 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87fc392-f98a-45e5-8f18-c3efbdf19e1a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.979791 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.983218 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-internal-tls-certs\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.983936 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-scripts\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.983966 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-config-data\") 
pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.984576 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-public-tls-certs\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.990228 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-combined-ca-bundle\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:10 crc kubenswrapper[5007]: I0218 19:57:10.996579 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh6k4\" (UniqueName: \"kubernetes.io/projected/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-kube-api-access-jh6k4\") pod \"placement-6988d9b8b6-5qsnc\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.002900 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.005254 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.007166 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.011551 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.096947 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.183345 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232fe30c-bc2a-4490-a02d-edfdf60baceb-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.183405 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232fe30c-bc2a-4490-a02d-edfdf60baceb-config-data\") pod \"watcher-applier-0\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.183427 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/232fe30c-bc2a-4490-a02d-edfdf60baceb-logs\") pod \"watcher-applier-0\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.183524 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74m6b\" (UniqueName: \"kubernetes.io/projected/232fe30c-bc2a-4490-a02d-edfdf60baceb-kube-api-access-74m6b\") pod 
\"watcher-applier-0\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.284830 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74m6b\" (UniqueName: \"kubernetes.io/projected/232fe30c-bc2a-4490-a02d-edfdf60baceb-kube-api-access-74m6b\") pod \"watcher-applier-0\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.285176 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232fe30c-bc2a-4490-a02d-edfdf60baceb-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.285218 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232fe30c-bc2a-4490-a02d-edfdf60baceb-config-data\") pod \"watcher-applier-0\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.285237 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/232fe30c-bc2a-4490-a02d-edfdf60baceb-logs\") pod \"watcher-applier-0\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.285653 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/232fe30c-bc2a-4490-a02d-edfdf60baceb-logs\") pod \"watcher-applier-0\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.289793 5007 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232fe30c-bc2a-4490-a02d-edfdf60baceb-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.302580 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232fe30c-bc2a-4490-a02d-edfdf60baceb-config-data\") pod \"watcher-applier-0\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.306923 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74m6b\" (UniqueName: \"kubernetes.io/projected/232fe30c-bc2a-4490-a02d-edfdf60baceb-kube-api-access-74m6b\") pod \"watcher-applier-0\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.476624 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-57dbf48d4-s2z7s"] Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.477783 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.482500 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.482654 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.482670 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sd9ml" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.482750 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.487908 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.488471 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.497573 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57dbf48d4-s2z7s"] Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.506991 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.590623 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-credential-keys\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.590678 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-scripts\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.590717 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-fernet-keys\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.590742 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrjmh\" (UniqueName: \"kubernetes.io/projected/71cbbd60-c1a2-4638-83a1-4131bc64a2de-kube-api-access-hrjmh\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.590764 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-combined-ca-bundle\") pod \"keystone-57dbf48d4-s2z7s\" (UID: 
\"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.590828 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-internal-tls-certs\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.590860 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-config-data\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.590910 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-public-tls-certs\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.611840 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5fdb8776f-pxvts"] Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.613467 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.628404 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.628852 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tgb24" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.638316 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.662505 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-76c66f4494-fz2df"] Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.664286 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.674906 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fdb8776f-pxvts"] Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.687916 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.689329 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6988d9b8b6-5qsnc"] Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.694467 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-scripts\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.694550 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-fernet-keys\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.694592 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrjmh\" (UniqueName: \"kubernetes.io/projected/71cbbd60-c1a2-4638-83a1-4131bc64a2de-kube-api-access-hrjmh\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.694609 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-combined-ca-bundle\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.694721 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-internal-tls-certs\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.694768 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-config-data\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.694867 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-public-tls-certs\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.694904 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-credential-keys\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.708350 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-credential-keys\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.741135 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-combined-ca-bundle\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.752163 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-config-data\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.757036 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-scripts\") pod \"keystone-57dbf48d4-s2z7s\" (UID: 
\"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.757567 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-internal-tls-certs\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.767554 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-76c66f4494-fz2df"] Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.773006 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w57rb" event={"ID":"c60b1e07-e980-450b-bfb8-c0210edeae7d","Type":"ContainerStarted","Data":"5d64228614eef24af8844ba7bf031ea7c91d80443ad2383f876552e4df811ee5"} Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.810625 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-config-data\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.810698 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-combined-ca-bundle\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.810932 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-config-data\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.811059 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41e889c2-3cd7-448d-8115-0f1cf3d6320a-logs\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.811174 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-logs\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.811210 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9tjx\" (UniqueName: \"kubernetes.io/projected/41e889c2-3cd7-448d-8115-0f1cf3d6320a-kube-api-access-c9tjx\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.811284 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-config-data-custom\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.811323 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-config-data-custom\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.811339 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-combined-ca-bundle\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.811446 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgsvn\" (UniqueName: \"kubernetes.io/projected/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-kube-api-access-pgsvn\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.828774 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-fernet-keys\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.831579 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71cbbd60-c1a2-4638-83a1-4131bc64a2de-public-tls-certs\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc 
kubenswrapper[5007]: I0218 19:57:11.832680 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrjmh\" (UniqueName: \"kubernetes.io/projected/71cbbd60-c1a2-4638-83a1-4131bc64a2de-kube-api-access-hrjmh\") pod \"keystone-57dbf48d4-s2z7s\" (UID: \"71cbbd60-c1a2-4638-83a1-4131bc64a2de\") " pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.846314 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" event={"ID":"f87fc392-f98a-45e5-8f18-c3efbdf19e1a","Type":"ContainerDied","Data":"e53dda0f0fc8f8f63490a7b8167319b855ae29ddc6ac455800737d5f36018e5b"} Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.846351 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74c64d6b7c-xfb47" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.846396 5007 scope.go:117] "RemoveContainer" containerID="e003b9b54791a8b096007fc8dd05ff4f0c648357d5cfe3bdccbd884cd1bdc921" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.909083 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.928008 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-config-data\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.928735 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41e889c2-3cd7-448d-8115-0f1cf3d6320a-logs\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.928875 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-logs\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.928920 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9tjx\" (UniqueName: \"kubernetes.io/projected/41e889c2-3cd7-448d-8115-0f1cf3d6320a-kube-api-access-c9tjx\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.928991 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-config-data-custom\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: 
\"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.929018 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-combined-ca-bundle\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.929037 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-config-data-custom\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.929130 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgsvn\" (UniqueName: \"kubernetes.io/projected/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-kube-api-access-pgsvn\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.929224 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-config-data\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.929252 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-combined-ca-bundle\") pod 
\"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.932106 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41e889c2-3cd7-448d-8115-0f1cf3d6320a-logs\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.935129 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-combined-ca-bundle\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.947558 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-logs\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.949402 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-w57rb" podStartSLOduration=4.358823784 podStartE2EDuration="54.949385409s" podCreationTimestamp="2026-02-18 19:56:17 +0000 UTC" firstStartedPulling="2026-02-18 19:56:19.58896488 +0000 UTC m=+1064.371084404" lastFinishedPulling="2026-02-18 19:57:10.179526505 +0000 UTC m=+1114.961646029" observedRunningTime="2026-02-18 19:57:11.809369769 +0000 UTC m=+1116.591489303" watchObservedRunningTime="2026-02-18 19:57:11.949385409 +0000 UTC m=+1116.731504933" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 
19:57:11.979643 5007 scope.go:117] "RemoveContainer" containerID="ff2e81c9fdf1c045532e6e08cb2aad57211eaa2b740975edfd742066085da8ee" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.981006 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-config-data-custom\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.982366 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-config-data-custom\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:11 crc kubenswrapper[5007]: I0218 19:57:11.992474 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-combined-ca-bundle\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.005472 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-config-data\") pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.047115 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgsvn\" (UniqueName: \"kubernetes.io/projected/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-kube-api-access-pgsvn\") 
pod \"barbican-keystone-listener-76c66f4494-fz2df\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.050013 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9tjx\" (UniqueName: \"kubernetes.io/projected/41e889c2-3cd7-448d-8115-0f1cf3d6320a-kube-api-access-c9tjx\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.064496 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-config-data\") pod \"barbican-worker-5fdb8776f-pxvts\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.123183 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.155814 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6551271-9859-4f43-8a89-5c5da823ef56" path="/var/lib/kubelet/pods/a6551271-9859-4f43-8a89-5c5da823ef56/volumes" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.181853 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a6612742-0c85-4677-9495-0e93227acfd6","Type":"ContainerDied","Data":"206b1e3d418574863021802f67e3d66e6c0e6110435585fce9a928a1b6419270"} Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.181983 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5487fdf-pt29p"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.184788 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5487fdf-pt29p"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.184823 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-548d77d6b8-6hqpj"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.185404 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.186696 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-548d77d6b8-6hqpj"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.186768 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74c64d6b7c-xfb47"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.186839 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74c64d6b7c-xfb47"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.186974 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5647fc5d54-8f4nk"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.188181 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.188424 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.199588 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.243907 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.243910 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbmt\" (UniqueName: \"kubernetes.io/projected/cb6286b0-40e6-4ca8-8750-25e256561c3a-kube-api-access-qhbmt\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.244284 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-combined-ca-bundle\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.244365 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-config-data\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.244487 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.244554 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pnrf\" (UniqueName: 
\"kubernetes.io/projected/db8a2bd8-30be-415b-a749-985a8539462c-kube-api-access-7pnrf\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.244636 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-config-data-custom\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.244703 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7rbt\" (UniqueName: \"kubernetes.io/projected/57a8c421-0faa-444c-a2fc-00d5d7485e79-kube-api-access-q7rbt\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.244802 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-dns-svc\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.244875 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.248476 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-config\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.248593 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a8c421-0faa-444c-a2fc-00d5d7485e79-config-data-custom\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.248667 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a8c421-0faa-444c-a2fc-00d5d7485e79-config-data\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.248761 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db8a2bd8-30be-415b-a749-985a8539462c-logs\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.248833 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-dns-swift-storage-0\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc 
kubenswrapper[5007]: I0218 19:57:12.248914 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a8c421-0faa-444c-a2fc-00d5d7485e79-logs\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.249033 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a8c421-0faa-444c-a2fc-00d5d7485e79-combined-ca-bundle\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.245377 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-64ccd4d69f-6ps7j"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.250985 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-64ccd4d69f-6ps7j" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.261667 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5647fc5d54-8f4nk"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.268811 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-64ccd4d69f-6ps7j"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.278989 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.308280 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.309687 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.310092 5007 scope.go:117] "RemoveContainer" containerID="9b88bcf24cf06997838cb08406b0c296187409e5f8b2b2dd44ff61e4b2ae03c4" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.316182 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5cf99f7bdb-5kdlj"] Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.318369 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.352502 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-combined-ca-bundle\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353267 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-config-data\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353346 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353366 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pnrf\" (UniqueName: 
\"kubernetes.io/projected/db8a2bd8-30be-415b-a749-985a8539462c-kube-api-access-7pnrf\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353411 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-config-data-custom\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353456 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7rbt\" (UniqueName: \"kubernetes.io/projected/57a8c421-0faa-444c-a2fc-00d5d7485e79-kube-api-access-q7rbt\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353506 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-dns-svc\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353523 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353545 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-config\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353607 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-config-data-custom\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353639 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a8c421-0faa-444c-a2fc-00d5d7485e79-config-data-custom\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353666 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a8c421-0faa-444c-a2fc-00d5d7485e79-config-data\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353683 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-config-data\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353708 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-combined-ca-bundle\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353736 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db8a2bd8-30be-415b-a749-985a8539462c-logs\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353752 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-logs\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353774 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-dns-swift-storage-0\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353804 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a8c421-0faa-444c-a2fc-00d5d7485e79-logs\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353940 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a8c421-0faa-444c-a2fc-00d5d7485e79-combined-ca-bundle\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.353974 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-combined-ca-bundle\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.354010 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m79p7\" (UniqueName: \"kubernetes.io/projected/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-kube-api-access-m79p7\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.354054 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-logs\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.354076 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-config-data-custom\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j" Feb 18 
19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.354092 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-config-data\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.354192 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhbmt\" (UniqueName: \"kubernetes.io/projected/cb6286b0-40e6-4ca8-8750-25e256561c3a-kube-api-access-qhbmt\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.354231 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n69mm\" (UniqueName: \"kubernetes.io/projected/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-kube-api-access-n69mm\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.355969 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db8a2bd8-30be-415b-a749-985a8539462c-logs\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.356076 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc 
kubenswrapper[5007]: I0218 19:57:12.360684 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a8c421-0faa-444c-a2fc-00d5d7485e79-logs\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.361290 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.361886 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-config\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.361939 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-config-data\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.361956 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-dns-svc\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.362844 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-dns-swift-storage-0\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.364364 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-config-data-custom\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.367932 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a8c421-0faa-444c-a2fc-00d5d7485e79-combined-ca-bundle\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.370656 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-combined-ca-bundle\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.373251 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a8c421-0faa-444c-a2fc-00d5d7485e79-config-data\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.375121 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cf99f7bdb-5kdlj"]
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.375972 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a8c421-0faa-444c-a2fc-00d5d7485e79-config-data-custom\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.379810 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pnrf\" (UniqueName: \"kubernetes.io/projected/db8a2bd8-30be-415b-a749-985a8539462c-kube-api-access-7pnrf\") pod \"barbican-api-548d77d6b8-6hqpj\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " pod="openstack/barbican-api-548d77d6b8-6hqpj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.386609 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7rbt\" (UniqueName: \"kubernetes.io/projected/57a8c421-0faa-444c-a2fc-00d5d7485e79-kube-api-access-q7rbt\") pod \"barbican-keystone-listener-5647fc5d54-8f4nk\" (UID: \"57a8c421-0faa-444c-a2fc-00d5d7485e79\") " pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.389382 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhbmt\" (UniqueName: \"kubernetes.io/projected/cb6286b0-40e6-4ca8-8750-25e256561c3a-kube-api-access-qhbmt\") pod \"dnsmasq-dns-76b5487fdf-pt29p\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " pod="openstack/dnsmasq-dns-76b5487fdf-pt29p"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.397672 5007 scope.go:117] "RemoveContainer" containerID="d4765d90ee3f0ec098c71741af727d967f71cab1abf6cb33a30b82cdefc3c0da"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.400851 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.402871 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.405069 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.405267 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.417191 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.417576 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-548d77d6b8-6hqpj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.422982 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.455568 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.455819 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-config-data-custom\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.455872 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-config-data\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.455892 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-combined-ca-bundle\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.455919 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbskm\" (UniqueName: \"kubernetes.io/projected/627b8aa5-8116-4367-b84f-2ea2c26a93b0-kube-api-access-rbskm\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.455946 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-logs\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.456022 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-combined-ca-bundle\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.456046 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m79p7\" (UniqueName: \"kubernetes.io/projected/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-kube-api-access-m79p7\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.456063 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.456086 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-logs\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.456107 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-config-data-custom\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.456121 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-config-data\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.456140 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-public-tls-certs\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.456179 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n69mm\" (UniqueName: \"kubernetes.io/projected/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-kube-api-access-n69mm\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.456201 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/627b8aa5-8116-4367-b84f-2ea2c26a93b0-logs\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.456224 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-config-data\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.456339 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.456724 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-logs\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.459199 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-logs\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.461711 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-config-data-custom\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.462583 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-config-data\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.463729 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-config-data\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.464938 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-config-data-custom\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.469127 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-combined-ca-bundle\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.470667 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-combined-ca-bundle\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.490928 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m79p7\" (UniqueName: \"kubernetes.io/projected/c50229c8-a34b-4a43-9e27-2cb3bf9100f1-kube-api-access-m79p7\") pod \"barbican-worker-64ccd4d69f-6ps7j\" (UID: \"c50229c8-a34b-4a43-9e27-2cb3bf9100f1\") " pod="openstack/barbican-worker-64ccd4d69f-6ps7j"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.497027 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n69mm\" (UniqueName: \"kubernetes.io/projected/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-kube-api-access-n69mm\") pod \"barbican-api-5cf99f7bdb-5kdlj\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " pod="openstack/barbican-api-5cf99f7bdb-5kdlj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.559398 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.559488 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-public-tls-certs\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.559842 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/627b8aa5-8116-4367-b84f-2ea2c26a93b0-logs\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.559888 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-config-data\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.559965 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.560010 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.560981 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbskm\" (UniqueName: \"kubernetes.io/projected/627b8aa5-8116-4367-b84f-2ea2c26a93b0-kube-api-access-rbskm\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.561865 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/627b8aa5-8116-4367-b84f-2ea2c26a93b0-logs\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.565177 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.570931 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-config-data\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.571513 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.575122 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.575839 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-public-tls-certs\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.615531 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbskm\" (UniqueName: \"kubernetes.io/projected/627b8aa5-8116-4367-b84f-2ea2c26a93b0-kube-api-access-rbskm\") pod \"watcher-api-0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.616722 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.649946 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.671015 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"]
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.747127 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-64ccd4d69f-6ps7j"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.757155 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cf99f7bdb-5kdlj"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.790111 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.960975 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"232fe30c-bc2a-4490-a02d-edfdf60baceb","Type":"ContainerStarted","Data":"d7bc6709977a9217e8259125ef4077ab47610f04ed2ea7c5c733be7f018db7c9"}
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.966052 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-76c66f4494-fz2df"]
Feb 18 19:57:12 crc kubenswrapper[5007]: I0218 19:57:12.979594 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57dbf48d4-s2z7s"]
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.004141 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6988d9b8b6-5qsnc" event={"ID":"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b","Type":"ContainerStarted","Data":"4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605"}
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.004191 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6988d9b8b6-5qsnc" event={"ID":"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b","Type":"ContainerStarted","Data":"08b1cd264a07fbc1a494fd040d0df841645b2a8d429ad2f38e9a62d492d09b2b"}
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.004648 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6988d9b8b6-5qsnc"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.004695 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6988d9b8b6-5qsnc"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.006683 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-548d77d6b8-6hqpj"]
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.046522 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6988d9b8b6-5qsnc" podStartSLOduration=3.046501653 podStartE2EDuration="3.046501653s" podCreationTimestamp="2026-02-18 19:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:13.029675471 +0000 UTC m=+1117.811795005" watchObservedRunningTime="2026-02-18 19:57:13.046501653 +0000 UTC m=+1117.828621177"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.115354 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fdb8776f-pxvts"]
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.321423 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-66bd9779d-c6rl5"]
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.323445 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.377484 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66bd9779d-c6rl5"]
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.401928 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5487fdf-pt29p"]
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.407934 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-logs\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.407992 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-internal-tls-certs\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.408039 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-combined-ca-bundle\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.408061 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-scripts\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.408109 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvkkv\" (UniqueName: \"kubernetes.io/projected/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-kube-api-access-pvkkv\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.408136 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-config-data\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.408170 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-public-tls-certs\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.510282 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-config-data\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.510583 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-public-tls-certs\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.510690 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-logs\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.510727 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-internal-tls-certs\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.510795 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-combined-ca-bundle\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.510816 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-scripts\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.510889 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvkkv\" (UniqueName: \"kubernetes.io/projected/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-kube-api-access-pvkkv\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.513272 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-logs\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.520195 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-internal-tls-certs\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.521742 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-scripts\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.522167 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-combined-ca-bundle\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.525015 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-config-data\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.527159 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-public-tls-certs\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.529042 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvkkv\" (UniqueName: \"kubernetes.io/projected/8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c-kube-api-access-pvkkv\") pod \"placement-66bd9779d-c6rl5\" (UID: \"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c\") " pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.629751 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5647fc5d54-8f4nk"]
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.798659 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66bd9779d-c6rl5"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.816967 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-64ccd4d69f-6ps7j"]
Feb 18 19:57:13 crc kubenswrapper[5007]: W0218 19:57:13.825412 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc50229c8_a34b_4a43_9e27_2cb3bf9100f1.slice/crio-74a1f3fb0982f4aadd30031ec17f5ec92f17b8b59497da1a766eaa790d01930e WatchSource:0}: Error finding container 74a1f3fb0982f4aadd30031ec17f5ec92f17b8b59497da1a766eaa790d01930e: Status 404 returned error can't find the container with id 74a1f3fb0982f4aadd30031ec17f5ec92f17b8b59497da1a766eaa790d01930e
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.840579 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Feb 18 19:57:13 crc kubenswrapper[5007]: W0218 19:57:13.863552 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod627b8aa5_8116_4367_b84f_2ea2c26a93b0.slice/crio-2854fd392e3ac4927ebd3d6d2d1c7ad61a40313c1f2d1cfc1867f13227309eb2 WatchSource:0}: Error finding container 2854fd392e3ac4927ebd3d6d2d1c7ad61a40313c1f2d1cfc1867f13227309eb2: Status 404 returned error can't find the container with id 2854fd392e3ac4927ebd3d6d2d1c7ad61a40313c1f2d1cfc1867f13227309eb2
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.879866 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cf99f7bdb-5kdlj"]
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.931231 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6612742-0c85-4677-9495-0e93227acfd6" path="/var/lib/kubelet/pods/a6612742-0c85-4677-9495-0e93227acfd6/volumes"
Feb 18 19:57:13 crc kubenswrapper[5007]: I0218 19:57:13.931834 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f87fc392-f98a-45e5-8f18-c3efbdf19e1a" path="/var/lib/kubelet/pods/f87fc392-f98a-45e5-8f18-c3efbdf19e1a/volumes"
Feb 18 19:57:13 crc kubenswrapper[5007]: W0218 19:57:13.942460 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e524dd1_5e4f_4baf_80fd_56b4e5724a63.slice/crio-eb8139f6319e504a54312e5aa1ab5de8ae9ccbc072ff4c4a64029a9c0914234f WatchSource:0}: Error finding container eb8139f6319e504a54312e5aa1ab5de8ae9ccbc072ff4c4a64029a9c0914234f: Status 404 returned error can't find the container with id eb8139f6319e504a54312e5aa1ab5de8ae9ccbc072ff4c4a64029a9c0914234f
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.027138 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" event={"ID":"8e524dd1-5e4f-4baf-80fd-56b4e5724a63","Type":"ContainerStarted","Data":"eb8139f6319e504a54312e5aa1ab5de8ae9ccbc072ff4c4a64029a9c0914234f"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.030576 5007 generic.go:334] "Generic (PLEG): container finished" podID="cb6286b0-40e6-4ca8-8750-25e256561c3a" containerID="839bec76ce9ccdf5ea4833f3aaa4cd9d7dfa5c29a8a5a00235648b6b4e62b7c2" exitCode=0
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.031626 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" event={"ID":"cb6286b0-40e6-4ca8-8750-25e256561c3a","Type":"ContainerDied","Data":"839bec76ce9ccdf5ea4833f3aaa4cd9d7dfa5c29a8a5a00235648b6b4e62b7c2"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.031650 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" event={"ID":"cb6286b0-40e6-4ca8-8750-25e256561c3a","Type":"ContainerStarted","Data":"a9cfea4b13478ab4435d6d63fce99c70cff25042e36bd099f4cab6f355c766e0"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.039881 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" event={"ID":"57a8c421-0faa-444c-a2fc-00d5d7485e79","Type":"ContainerStarted","Data":"52e5468068f864954cf95e8fc6cfcab92c0438ac6026e9bc629d3e2227653d0b"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.042193 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57dbf48d4-s2z7s" event={"ID":"71cbbd60-c1a2-4638-83a1-4131bc64a2de","Type":"ContainerStarted","Data":"2e900846deee939de53f7509edf8a1388dc92b33555857b1a5dcb0769624b453"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.042245 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57dbf48d4-s2z7s" event={"ID":"71cbbd60-c1a2-4638-83a1-4131bc64a2de","Type":"ContainerStarted","Data":"b194bac53c28c4438c6cd2f7d12e968d330c0e1bf820289946079daf86240d0d"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.042322 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-57dbf48d4-s2z7s"
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.085176 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6988d9b8b6-5qsnc" event={"ID":"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b","Type":"ContainerStarted","Data":"3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.133160 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fdb8776f-pxvts" event={"ID":"41e889c2-3cd7-448d-8115-0f1cf3d6320a","Type":"ContainerStarted","Data":"5fed0fd6cfe3534be7399de654efe795cd2a775b699a48863cb8b965301a1916"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.138094 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" event={"ID":"f737ff09-8d60-4d1c-b53b-6c3c3864ce67","Type":"ContainerStarted","Data":"cfd0521393246442c0aed12296e201145c5292f74fd356cdc7da5f004f0c7336"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.185886 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-548d77d6b8-6hqpj" event={"ID":"db8a2bd8-30be-415b-a749-985a8539462c","Type":"ContainerStarted","Data":"01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.185966 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-548d77d6b8-6hqpj" event={"ID":"db8a2bd8-30be-415b-a749-985a8539462c","Type":"ContainerStarted","Data":"b734c771daa7576280e47a789dd7e7488e9756fe1c667919217cd6aa3533e6b3"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.189781 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"627b8aa5-8116-4367-b84f-2ea2c26a93b0","Type":"ContainerStarted","Data":"2854fd392e3ac4927ebd3d6d2d1c7ad61a40313c1f2d1cfc1867f13227309eb2"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.216670 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-57dbf48d4-s2z7s" podStartSLOduration=3.216649579 podStartE2EDuration="3.216649579s" podCreationTimestamp="2026-02-18 19:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:14.100374809 +0000 UTC m=+1118.882494333" watchObservedRunningTime="2026-02-18 19:57:14.216649579 +0000 UTC m=+1118.998769103"
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.235125 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"232fe30c-bc2a-4490-a02d-edfdf60baceb","Type":"ContainerStarted","Data":"16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200"}
Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.243960 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64ccd4d69f-6ps7j"
event={"ID":"c50229c8-a34b-4a43-9e27-2cb3bf9100f1","Type":"ContainerStarted","Data":"74a1f3fb0982f4aadd30031ec17f5ec92f17b8b59497da1a766eaa790d01930e"} Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.260009 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=4.259987571 podStartE2EDuration="4.259987571s" podCreationTimestamp="2026-02-18 19:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:14.252751313 +0000 UTC m=+1119.034870847" watchObservedRunningTime="2026-02-18 19:57:14.259987571 +0000 UTC m=+1119.042107095" Feb 18 19:57:14 crc kubenswrapper[5007]: I0218 19:57:14.475710 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66bd9779d-c6rl5"] Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.254350 5007 generic.go:334] "Generic (PLEG): container finished" podID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerID="fc90edc84d1b4a0b247a94fbe61aaefeb37cfffbaad11795df3894ac691a8cef" exitCode=1 Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.254855 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617","Type":"ContainerDied","Data":"fc90edc84d1b4a0b247a94fbe61aaefeb37cfffbaad11795df3894ac691a8cef"} Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.254888 5007 scope.go:117] "RemoveContainer" containerID="53bba64338fdec42ba5d3b6fe3fa5d56b1967880c71e3a466c09f3e30962f70b" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.256526 5007 scope.go:117] "RemoveContainer" containerID="fc90edc84d1b4a0b247a94fbe61aaefeb37cfffbaad11795df3894ac691a8cef" Feb 18 19:57:15 crc kubenswrapper[5007]: E0218 19:57:15.256753 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(2d7e3b6c-20a8-4901-8cb2-b856b9e9b617)\"" pod="openstack/watcher-decision-engine-0" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.260109 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66bd9779d-c6rl5" event={"ID":"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c","Type":"ContainerStarted","Data":"b113d200eddae1a64b166d2d4eaa95b32f1358ddbcddbff5d5e707dfa870bb91"} Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.262822 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-548d77d6b8-6hqpj" event={"ID":"db8a2bd8-30be-415b-a749-985a8539462c","Type":"ContainerStarted","Data":"5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db"} Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.263132 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.263162 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.268476 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" event={"ID":"8e524dd1-5e4f-4baf-80fd-56b4e5724a63","Type":"ContainerStarted","Data":"9c7e4237a51292f61a672df7abb7d7cfbf610914c90c30ab352313745a46ba00"} Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.272669 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" event={"ID":"cb6286b0-40e6-4ca8-8750-25e256561c3a","Type":"ContainerStarted","Data":"7c2ba9724d8802f4f3e9c0adcc8150b68f4af86f25a5460ea4407b0f4816d740"} Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.272888 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.285685 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"627b8aa5-8116-4367-b84f-2ea2c26a93b0","Type":"ContainerStarted","Data":"5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06"} Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.301079 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-548d77d6b8-6hqpj" podStartSLOduration=4.30105784 podStartE2EDuration="4.30105784s" podCreationTimestamp="2026-02-18 19:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:15.300415541 +0000 UTC m=+1120.082535075" watchObservedRunningTime="2026-02-18 19:57:15.30105784 +0000 UTC m=+1120.083177364" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.323778 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" podStartSLOduration=4.32375743 podStartE2EDuration="4.32375743s" podCreationTimestamp="2026-02-18 19:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:15.316050979 +0000 UTC m=+1120.098170533" watchObservedRunningTime="2026-02-18 19:57:15.32375743 +0000 UTC m=+1120.105876954" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.665774 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.666322 5007 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.761239 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-548d77d6b8-6hqpj"] Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.799765 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-646fd64f64-42wzb"] Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.801684 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.806268 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.806476 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.837577 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-646fd64f64-42wzb"] Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.921292 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-public-tls-certs\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.921334 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-combined-ca-bundle\") pod 
\"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.921360 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-config-data\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.921454 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv55z\" (UniqueName: \"kubernetes.io/projected/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-kube-api-access-hv55z\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.921918 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-config-data-custom\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.922087 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-internal-tls-certs\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:15 crc kubenswrapper[5007]: I0218 19:57:15.922139 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-logs\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.024954 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv55z\" (UniqueName: \"kubernetes.io/projected/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-kube-api-access-hv55z\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.025101 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-config-data-custom\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.025177 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-internal-tls-certs\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.025207 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-logs\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.025272 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-public-tls-certs\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.025297 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-combined-ca-bundle\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.025332 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-config-data\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.026753 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-logs\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.030402 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-config-data-custom\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.033870 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-public-tls-certs\") pod 
\"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.034848 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-config-data\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.042018 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-combined-ca-bundle\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.042422 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-internal-tls-certs\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.052694 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv55z\" (UniqueName: \"kubernetes.io/projected/a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28-kube-api-access-hv55z\") pod \"barbican-api-646fd64f64-42wzb\" (UID: \"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28\") " pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.145736 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.317766 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66bd9779d-c6rl5" event={"ID":"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c","Type":"ContainerStarted","Data":"00ae0e4f82c42fef7644618da1ccd97a72c890596f59bdf8c62495efeb90eb06"} Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.323666 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"627b8aa5-8116-4367-b84f-2ea2c26a93b0","Type":"ContainerStarted","Data":"16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc"} Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.354162 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.354135262 podStartE2EDuration="4.354135262s" podCreationTimestamp="2026-02-18 19:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:16.344363942 +0000 UTC m=+1121.126483486" watchObservedRunningTime="2026-02-18 19:57:16.354135262 +0000 UTC m=+1121.136254786" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.508528 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 18 19:57:16 crc kubenswrapper[5007]: I0218 19:57:16.966322 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-646fd64f64-42wzb"] Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.334593 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fdb8776f-pxvts" event={"ID":"41e889c2-3cd7-448d-8115-0f1cf3d6320a","Type":"ContainerStarted","Data":"c576ed74566e13d54c4e1f444f6822de09ed17225bb490a9659adb4bca2b462b"} Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.336558 5007 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" event={"ID":"f737ff09-8d60-4d1c-b53b-6c3c3864ce67","Type":"ContainerStarted","Data":"7f3447ec62638ac73c1aa271a47cff9bc6f1a43baeda8365aeb8f29e5f658bbd"} Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.344748 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66bd9779d-c6rl5" event={"ID":"8e6f39b1-8db8-4cd3-aef5-ab45e71caa7c","Type":"ContainerStarted","Data":"a462fa3a9dc24becf8fe969be7463db0b9480e642354670740534c64afe95913"} Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.345054 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66bd9779d-c6rl5" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.345082 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-66bd9779d-c6rl5" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.349126 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64ccd4d69f-6ps7j" event={"ID":"c50229c8-a34b-4a43-9e27-2cb3bf9100f1","Type":"ContainerStarted","Data":"9dc62c5e135b7cb7205764323a1e7387e4872cfe6fbefc44d24200380201ada2"} Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.352245 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" event={"ID":"8e524dd1-5e4f-4baf-80fd-56b4e5724a63","Type":"ContainerStarted","Data":"a68823f2a6faacffdbdb7a75a4632b3b886cf6c2ecd8136a7b40e4ac34d70398"} Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.352473 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.354315 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" 
event={"ID":"57a8c421-0faa-444c-a2fc-00d5d7485e79","Type":"ContainerStarted","Data":"e0b74d1ae682adc8723f69e2f0991e7563235396c209946b9e95b1c0450c2528"} Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.355974 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-646fd64f64-42wzb" event={"ID":"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28","Type":"ContainerStarted","Data":"db290a21f77648f664efbb91f8afac861be7757efde7aeaea8b1e1e147f8c915"} Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.356270 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-548d77d6b8-6hqpj" podUID="db8a2bd8-30be-415b-a749-985a8539462c" containerName="barbican-api-log" containerID="cri-o://01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17" gracePeriod=30 Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.356388 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-548d77d6b8-6hqpj" podUID="db8a2bd8-30be-415b-a749-985a8539462c" containerName="barbican-api" containerID="cri-o://5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db" gracePeriod=30 Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.356335 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.367362 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-66bd9779d-c6rl5" podStartSLOduration=4.367345354 podStartE2EDuration="4.367345354s" podCreationTimestamp="2026-02-18 19:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:17.366058767 +0000 UTC m=+1122.148178301" watchObservedRunningTime="2026-02-18 19:57:17.367345354 +0000 UTC m=+1122.149464878" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.395621 5007 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" podStartSLOduration=5.395597453 podStartE2EDuration="5.395597453s" podCreationTimestamp="2026-02-18 19:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:17.39411381 +0000 UTC m=+1122.176233334" watchObservedRunningTime="2026-02-18 19:57:17.395597453 +0000 UTC m=+1122.177716977" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.757592 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.791049 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.910460 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.933603 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.934164 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.934189 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.934371 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.935337 5007 scope.go:117] "RemoveContainer" containerID="fc90edc84d1b4a0b247a94fbe61aaefeb37cfffbaad11795df3894ac691a8cef" Feb 18 19:57:17 crc kubenswrapper[5007]: E0218 19:57:17.935563 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(2d7e3b6c-20a8-4901-8cb2-b856b9e9b617)\"" pod="openstack/watcher-decision-engine-0" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.979476 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-combined-ca-bundle\") pod \"db8a2bd8-30be-415b-a749-985a8539462c\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.979547 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pnrf\" (UniqueName: \"kubernetes.io/projected/db8a2bd8-30be-415b-a749-985a8539462c-kube-api-access-7pnrf\") pod \"db8a2bd8-30be-415b-a749-985a8539462c\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.979631 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db8a2bd8-30be-415b-a749-985a8539462c-logs\") pod \"db8a2bd8-30be-415b-a749-985a8539462c\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.979648 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-config-data-custom\") pod \"db8a2bd8-30be-415b-a749-985a8539462c\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.979671 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-config-data\") pod \"db8a2bd8-30be-415b-a749-985a8539462c\" (UID: \"db8a2bd8-30be-415b-a749-985a8539462c\") " Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.981830 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db8a2bd8-30be-415b-a749-985a8539462c-logs" (OuterVolumeSpecName: "logs") pod "db8a2bd8-30be-415b-a749-985a8539462c" (UID: "db8a2bd8-30be-415b-a749-985a8539462c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.986874 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "db8a2bd8-30be-415b-a749-985a8539462c" (UID: "db8a2bd8-30be-415b-a749-985a8539462c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[5007]: I0218 19:57:17.998188 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db8a2bd8-30be-415b-a749-985a8539462c-kube-api-access-7pnrf" (OuterVolumeSpecName: "kube-api-access-7pnrf") pod "db8a2bd8-30be-415b-a749-985a8539462c" (UID: "db8a2bd8-30be-415b-a749-985a8539462c"). InnerVolumeSpecName "kube-api-access-7pnrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.028538 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db8a2bd8-30be-415b-a749-985a8539462c" (UID: "db8a2bd8-30be-415b-a749-985a8539462c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.057693 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-config-data" (OuterVolumeSpecName: "config-data") pod "db8a2bd8-30be-415b-a749-985a8539462c" (UID: "db8a2bd8-30be-415b-a749-985a8539462c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.082514 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.082545 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pnrf\" (UniqueName: \"kubernetes.io/projected/db8a2bd8-30be-415b-a749-985a8539462c-kube-api-access-7pnrf\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.082557 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db8a2bd8-30be-415b-a749-985a8539462c-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.082568 5007 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.082577 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db8a2bd8-30be-415b-a749-985a8539462c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.367910 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" 
event={"ID":"f737ff09-8d60-4d1c-b53b-6c3c3864ce67","Type":"ContainerStarted","Data":"17e10e3c6149311ee42d55e79626945ff39b17f538138379d6526bc5d334e215"} Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.371784 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64ccd4d69f-6ps7j" event={"ID":"c50229c8-a34b-4a43-9e27-2cb3bf9100f1","Type":"ContainerStarted","Data":"79cc40d0bc2eecd192f28b89e55b84b153f8d03d71f9b0cfe4d0bdfaa6aa9be4"} Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.374285 5007 generic.go:334] "Generic (PLEG): container finished" podID="db8a2bd8-30be-415b-a749-985a8539462c" containerID="5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db" exitCode=0 Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.374308 5007 generic.go:334] "Generic (PLEG): container finished" podID="db8a2bd8-30be-415b-a749-985a8539462c" containerID="01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17" exitCode=143 Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.374338 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-548d77d6b8-6hqpj" event={"ID":"db8a2bd8-30be-415b-a749-985a8539462c","Type":"ContainerDied","Data":"5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db"} Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.374355 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-548d77d6b8-6hqpj" event={"ID":"db8a2bd8-30be-415b-a749-985a8539462c","Type":"ContainerDied","Data":"01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17"} Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.374366 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-548d77d6b8-6hqpj" event={"ID":"db8a2bd8-30be-415b-a749-985a8539462c","Type":"ContainerDied","Data":"b734c771daa7576280e47a789dd7e7488e9756fe1c667919217cd6aa3533e6b3"} Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.374382 5007 
scope.go:117] "RemoveContainer" containerID="5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.374468 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-548d77d6b8-6hqpj" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.385413 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" podStartSLOduration=4.034472864 podStartE2EDuration="7.385393334s" podCreationTimestamp="2026-02-18 19:57:11 +0000 UTC" firstStartedPulling="2026-02-18 19:57:13.155490984 +0000 UTC m=+1117.937610498" lastFinishedPulling="2026-02-18 19:57:16.506411444 +0000 UTC m=+1121.288530968" observedRunningTime="2026-02-18 19:57:18.381194413 +0000 UTC m=+1123.163313947" watchObservedRunningTime="2026-02-18 19:57:18.385393334 +0000 UTC m=+1123.167512858" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.397560 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" event={"ID":"57a8c421-0faa-444c-a2fc-00d5d7485e79","Type":"ContainerStarted","Data":"91800b56eb7f5f31b4fa669af90179f9c1630d0d0592749a85ed6e4b38bf7a57"} Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.430785 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-646fd64f64-42wzb" event={"ID":"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28","Type":"ContainerStarted","Data":"7c9665cc11a5922fe93f17c847b1a2aca306be7215ef84f67356b9978c4a0a8f"} Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.438798 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-646fd64f64-42wzb" event={"ID":"a1dfa5c8-be66-47dc-a1b2-524ef0c5ac28","Type":"ContainerStarted","Data":"110b6e60102c8ddfa608218372fc391ca32d879390190720e48f638482b10bce"} Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.440970 5007 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-64ccd4d69f-6ps7j" podStartSLOduration=4.722673636 podStartE2EDuration="7.440951145s" podCreationTimestamp="2026-02-18 19:57:11 +0000 UTC" firstStartedPulling="2026-02-18 19:57:13.85743693 +0000 UTC m=+1118.639556454" lastFinishedPulling="2026-02-18 19:57:16.575714439 +0000 UTC m=+1121.357833963" observedRunningTime="2026-02-18 19:57:18.407092735 +0000 UTC m=+1123.189212269" watchObservedRunningTime="2026-02-18 19:57:18.440951145 +0000 UTC m=+1123.223070669" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.442659 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5647fc5d54-8f4nk" podStartSLOduration=4.5503565009999996 podStartE2EDuration="7.442652404s" podCreationTimestamp="2026-02-18 19:57:11 +0000 UTC" firstStartedPulling="2026-02-18 19:57:13.663701531 +0000 UTC m=+1118.445821055" lastFinishedPulling="2026-02-18 19:57:16.555997434 +0000 UTC m=+1121.338116958" observedRunningTime="2026-02-18 19:57:18.439093642 +0000 UTC m=+1123.221213186" watchObservedRunningTime="2026-02-18 19:57:18.442652404 +0000 UTC m=+1123.224771928" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.452593 5007 scope.go:117] "RemoveContainer" containerID="01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.453122 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5fdb8776f-pxvts"] Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.453183 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.453421 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.464540 5007 scope.go:117] 
"RemoveContainer" containerID="fc90edc84d1b4a0b247a94fbe61aaefeb37cfffbaad11795df3894ac691a8cef" Feb 18 19:57:18 crc kubenswrapper[5007]: E0218 19:57:18.464926 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(2d7e3b6c-20a8-4901-8cb2-b856b9e9b617)\"" pod="openstack/watcher-decision-engine-0" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.465715 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fdb8776f-pxvts" event={"ID":"41e889c2-3cd7-448d-8115-0f1cf3d6320a","Type":"ContainerStarted","Data":"1079c215f6d40e61bfb3043b99968e40debfaa75b7672af314d80983870365c9"} Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.535472 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-548d77d6b8-6hqpj"] Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.543962 5007 scope.go:117] "RemoveContainer" containerID="5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db" Feb 18 19:57:18 crc kubenswrapper[5007]: E0218 19:57:18.553628 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db\": container with ID starting with 5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db not found: ID does not exist" containerID="5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.553680 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db"} err="failed to get container status 
\"5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db\": rpc error: code = NotFound desc = could not find container \"5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db\": container with ID starting with 5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db not found: ID does not exist" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.553709 5007 scope.go:117] "RemoveContainer" containerID="01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17" Feb 18 19:57:18 crc kubenswrapper[5007]: E0218 19:57:18.562595 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17\": container with ID starting with 01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17 not found: ID does not exist" containerID="01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.562649 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17"} err="failed to get container status \"01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17\": rpc error: code = NotFound desc = could not find container \"01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17\": container with ID starting with 01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17 not found: ID does not exist" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.562677 5007 scope.go:117] "RemoveContainer" containerID="5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.567338 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db"} err="failed to get 
container status \"5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db\": rpc error: code = NotFound desc = could not find container \"5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db\": container with ID starting with 5ecc771b060cbb1b6ddb8632ddcf76043a1c45b24ef3831c5045fbb17f0d07db not found: ID does not exist" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.567390 5007 scope.go:117] "RemoveContainer" containerID="01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.571362 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17"} err="failed to get container status \"01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17\": rpc error: code = NotFound desc = could not find container \"01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17\": container with ID starting with 01b93b5affaf11bea1acafab041d3a0a020b418bd3d0794cadabf9729f3dbd17 not found: ID does not exist" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.571428 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-548d77d6b8-6hqpj"] Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.592857 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-76c66f4494-fz2df"] Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.602311 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5fdb8776f-pxvts" podStartSLOduration=4.226898175 podStartE2EDuration="7.602288386s" podCreationTimestamp="2026-02-18 19:57:11 +0000 UTC" firstStartedPulling="2026-02-18 19:57:13.178980467 +0000 UTC m=+1117.961099991" lastFinishedPulling="2026-02-18 19:57:16.554370678 +0000 UTC m=+1121.336490202" observedRunningTime="2026-02-18 19:57:18.543752809 +0000 UTC 
m=+1123.325872333" watchObservedRunningTime="2026-02-18 19:57:18.602288386 +0000 UTC m=+1123.384407910" Feb 18 19:57:18 crc kubenswrapper[5007]: I0218 19:57:18.611363 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-646fd64f64-42wzb" podStartSLOduration=3.611346846 podStartE2EDuration="3.611346846s" podCreationTimestamp="2026-02-18 19:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:18.580944815 +0000 UTC m=+1123.363064339" watchObservedRunningTime="2026-02-18 19:57:18.611346846 +0000 UTC m=+1123.393466370" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.473850 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5fdb8776f-pxvts" podUID="41e889c2-3cd7-448d-8115-0f1cf3d6320a" containerName="barbican-worker-log" containerID="cri-o://c576ed74566e13d54c4e1f444f6822de09ed17225bb490a9659adb4bca2b462b" gracePeriod=30 Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.473897 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5fdb8776f-pxvts" podUID="41e889c2-3cd7-448d-8115-0f1cf3d6320a" containerName="barbican-worker" containerID="cri-o://1079c215f6d40e61bfb3043b99968e40debfaa75b7672af314d80983870365c9" gracePeriod=30 Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.474086 5007 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.524593 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.770395 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bb55c746c-lpr88"] Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.770930 5007 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/neutron-bb55c746c-lpr88" podUID="b110d261-b844-4d86-9226-c46707c2ecd4" containerName="neutron-api" containerID="cri-o://bf06207a9ac56ae4342853d3d79982a6a06136f83560a6fbb0cb101b3b10dc8b" gracePeriod=30 Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.771464 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bb55c746c-lpr88" podUID="b110d261-b844-4d86-9226-c46707c2ecd4" containerName="neutron-httpd" containerID="cri-o://2de1370864bd8d4db31250bfbcf1aea05924b76da01ff9133f9f833454f7729e" gracePeriod=30 Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.812394 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b9cfd5589-mgdww"] Feb 18 19:57:19 crc kubenswrapper[5007]: E0218 19:57:19.813036 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db8a2bd8-30be-415b-a749-985a8539462c" containerName="barbican-api" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.813061 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="db8a2bd8-30be-415b-a749-985a8539462c" containerName="barbican-api" Feb 18 19:57:19 crc kubenswrapper[5007]: E0218 19:57:19.813094 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db8a2bd8-30be-415b-a749-985a8539462c" containerName="barbican-api-log" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.813100 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="db8a2bd8-30be-415b-a749-985a8539462c" containerName="barbican-api-log" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.813341 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="db8a2bd8-30be-415b-a749-985a8539462c" containerName="barbican-api" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.813361 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="db8a2bd8-30be-415b-a749-985a8539462c" containerName="barbican-api-log" Feb 18 19:57:19 crc kubenswrapper[5007]: 
I0218 19:57:19.814392 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.815398 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.825718 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b9cfd5589-mgdww"] Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.876198 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-bb55c746c-lpr88" podUID="b110d261-b844-4d86-9226-c46707c2ecd4" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.171:9696/\": read tcp 10.217.0.2:39686->10.217.0.171:9696: read: connection reset by peer" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.923888 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-ovndb-tls-certs\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.924390 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-internal-tls-certs\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.924558 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-public-tls-certs\") pod \"neutron-5b9cfd5589-mgdww\" (UID: 
\"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.924591 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-config\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.924609 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-combined-ca-bundle\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.924633 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-httpd-config\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.924664 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2flcq\" (UniqueName: \"kubernetes.io/projected/f37e878a-a627-494f-b7b6-0d57d8960ade-kube-api-access-2flcq\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:19 crc kubenswrapper[5007]: I0218 19:57:19.927041 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db8a2bd8-30be-415b-a749-985a8539462c" path="/var/lib/kubelet/pods/db8a2bd8-30be-415b-a749-985a8539462c/volumes" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.000369 5007 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7957464b6-627s5" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.028076 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-internal-tls-certs\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.028231 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-public-tls-certs\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.028261 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-config\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.028285 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-combined-ca-bundle\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.028315 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-httpd-config\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" 
Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.028354 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2flcq\" (UniqueName: \"kubernetes.io/projected/f37e878a-a627-494f-b7b6-0d57d8960ade-kube-api-access-2flcq\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.028413 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-ovndb-tls-certs\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.034801 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-internal-tls-certs\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.035770 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-httpd-config\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.038580 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-combined-ca-bundle\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.039110 5007 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-config\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.039568 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-public-tls-certs\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.039817 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f37e878a-a627-494f-b7b6-0d57d8960ade-ovndb-tls-certs\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.046278 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2flcq\" (UniqueName: \"kubernetes.io/projected/f37e878a-a627-494f-b7b6-0d57d8960ade-kube-api-access-2flcq\") pod \"neutron-5b9cfd5589-mgdww\" (UID: \"f37e878a-a627-494f-b7b6-0d57d8960ade\") " pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.139504 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.486643 5007 generic.go:334] "Generic (PLEG): container finished" podID="c60b1e07-e980-450b-bfb8-c0210edeae7d" containerID="5d64228614eef24af8844ba7bf031ea7c91d80443ad2383f876552e4df811ee5" exitCode=0 Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.486734 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w57rb" event={"ID":"c60b1e07-e980-450b-bfb8-c0210edeae7d","Type":"ContainerDied","Data":"5d64228614eef24af8844ba7bf031ea7c91d80443ad2383f876552e4df811ee5"} Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.490056 5007 generic.go:334] "Generic (PLEG): container finished" podID="b110d261-b844-4d86-9226-c46707c2ecd4" containerID="2de1370864bd8d4db31250bfbcf1aea05924b76da01ff9133f9f833454f7729e" exitCode=0 Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.490123 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb55c746c-lpr88" event={"ID":"b110d261-b844-4d86-9226-c46707c2ecd4","Type":"ContainerDied","Data":"2de1370864bd8d4db31250bfbcf1aea05924b76da01ff9133f9f833454f7729e"} Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.494592 5007 generic.go:334] "Generic (PLEG): container finished" podID="41e889c2-3cd7-448d-8115-0f1cf3d6320a" containerID="1079c215f6d40e61bfb3043b99968e40debfaa75b7672af314d80983870365c9" exitCode=0 Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.494624 5007 generic.go:334] "Generic (PLEG): container finished" podID="41e889c2-3cd7-448d-8115-0f1cf3d6320a" containerID="c576ed74566e13d54c4e1f444f6822de09ed17225bb490a9659adb4bca2b462b" exitCode=143 Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.494797 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" podUID="f737ff09-8d60-4d1c-b53b-6c3c3864ce67" containerName="barbican-keystone-listener-log" 
containerID="cri-o://7f3447ec62638ac73c1aa271a47cff9bc6f1a43baeda8365aeb8f29e5f658bbd" gracePeriod=30 Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.495065 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fdb8776f-pxvts" event={"ID":"41e889c2-3cd7-448d-8115-0f1cf3d6320a","Type":"ContainerDied","Data":"1079c215f6d40e61bfb3043b99968e40debfaa75b7672af314d80983870365c9"} Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.495097 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fdb8776f-pxvts" event={"ID":"41e889c2-3cd7-448d-8115-0f1cf3d6320a","Type":"ContainerDied","Data":"c576ed74566e13d54c4e1f444f6822de09ed17225bb490a9659adb4bca2b462b"} Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.495118 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" podUID="f737ff09-8d60-4d1c-b53b-6c3c3864ce67" containerName="barbican-keystone-listener" containerID="cri-o://17e10e3c6149311ee42d55e79626945ff39b17f538138379d6526bc5d334e215" gracePeriod=30 Feb 18 19:57:20 crc kubenswrapper[5007]: I0218 19:57:20.800527 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:57:21 crc kubenswrapper[5007]: I0218 19:57:21.158624 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" Feb 18 19:57:21 crc kubenswrapper[5007]: I0218 19:57:21.508020 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 18 19:57:21 crc kubenswrapper[5007]: I0218 19:57:21.511308 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-784db9c5db-lr9xh" Feb 18 19:57:21 crc kubenswrapper[5007]: I0218 19:57:21.517673 5007 generic.go:334] "Generic (PLEG): container finished" podID="f737ff09-8d60-4d1c-b53b-6c3c3864ce67" 
containerID="7f3447ec62638ac73c1aa271a47cff9bc6f1a43baeda8365aeb8f29e5f658bbd" exitCode=143 Feb 18 19:57:21 crc kubenswrapper[5007]: I0218 19:57:21.517802 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" event={"ID":"f737ff09-8d60-4d1c-b53b-6c3c3864ce67","Type":"ContainerDied","Data":"7f3447ec62638ac73c1aa271a47cff9bc6f1a43baeda8365aeb8f29e5f658bbd"} Feb 18 19:57:21 crc kubenswrapper[5007]: I0218 19:57:21.577768 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7957464b6-627s5"] Feb 18 19:57:21 crc kubenswrapper[5007]: I0218 19:57:21.578060 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7957464b6-627s5" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon-log" containerID="cri-o://a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3" gracePeriod=30 Feb 18 19:57:21 crc kubenswrapper[5007]: I0218 19:57:21.578718 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7957464b6-627s5" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon" containerID="cri-o://60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7" gracePeriod=30 Feb 18 19:57:21 crc kubenswrapper[5007]: I0218 19:57:21.587361 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7957464b6-627s5" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 18 19:57:21 crc kubenswrapper[5007]: I0218 19:57:21.614137 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 18 19:57:21 crc kubenswrapper[5007]: I0218 19:57:21.715675 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-bb55c746c-lpr88" podUID="b110d261-b844-4d86-9226-c46707c2ecd4" 
containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.171:9696/\": dial tcp 10.217.0.171:9696: connect: connection refused" Feb 18 19:57:22 crc kubenswrapper[5007]: I0218 19:57:22.527624 5007 generic.go:334] "Generic (PLEG): container finished" podID="f737ff09-8d60-4d1c-b53b-6c3c3864ce67" containerID="17e10e3c6149311ee42d55e79626945ff39b17f538138379d6526bc5d334e215" exitCode=0 Feb 18 19:57:22 crc kubenswrapper[5007]: I0218 19:57:22.527704 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" event={"ID":"f737ff09-8d60-4d1c-b53b-6c3c3864ce67","Type":"ContainerDied","Data":"17e10e3c6149311ee42d55e79626945ff39b17f538138379d6526bc5d334e215"} Feb 18 19:57:22 crc kubenswrapper[5007]: I0218 19:57:22.567993 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 18 19:57:22 crc kubenswrapper[5007]: I0218 19:57:22.619037 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:22 crc kubenswrapper[5007]: I0218 19:57:22.702290 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-765c898489-2trsf"] Feb 18 19:57:22 crc kubenswrapper[5007]: I0218 19:57:22.702595 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-765c898489-2trsf" podUID="bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" containerName="dnsmasq-dns" containerID="cri-o://b55d84867b96866b4985b5b892baa75aeed5f96ed4b31004b4c3b000262a13cb" gracePeriod=10 Feb 18 19:57:22 crc kubenswrapper[5007]: I0218 19:57:22.791238 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 19:57:22 crc kubenswrapper[5007]: I0218 19:57:22.805796 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 19:57:23 crc kubenswrapper[5007]: I0218 
19:57:23.545015 5007 generic.go:334] "Generic (PLEG): container finished" podID="b110d261-b844-4d86-9226-c46707c2ecd4" containerID="bf06207a9ac56ae4342853d3d79982a6a06136f83560a6fbb0cb101b3b10dc8b" exitCode=0 Feb 18 19:57:23 crc kubenswrapper[5007]: I0218 19:57:23.545075 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb55c746c-lpr88" event={"ID":"b110d261-b844-4d86-9226-c46707c2ecd4","Type":"ContainerDied","Data":"bf06207a9ac56ae4342853d3d79982a6a06136f83560a6fbb0cb101b3b10dc8b"} Feb 18 19:57:23 crc kubenswrapper[5007]: I0218 19:57:23.546666 5007 generic.go:334] "Generic (PLEG): container finished" podID="bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" containerID="b55d84867b96866b4985b5b892baa75aeed5f96ed4b31004b4c3b000262a13cb" exitCode=0 Feb 18 19:57:23 crc kubenswrapper[5007]: I0218 19:57:23.547548 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765c898489-2trsf" event={"ID":"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c","Type":"ContainerDied","Data":"b55d84867b96866b4985b5b892baa75aeed5f96ed4b31004b4c3b000262a13cb"} Feb 18 19:57:23 crc kubenswrapper[5007]: I0218 19:57:23.715157 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7957464b6-627s5" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:48166->10.217.0.163:8443: read: connection reset by peer" Feb 18 19:57:23 crc kubenswrapper[5007]: I0218 19:57:23.721170 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:57:23 crc kubenswrapper[5007]: E0218 19:57:23.860444 5007 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb695b866_e754_46f6_8cd8_ef152bfc1a54.slice/crio-conmon-60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.342814 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-765c898489-2trsf" podUID="bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.511697 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.582093 5007 generic.go:334] "Generic (PLEG): container finished" podID="203ba913-bbde-4801-b85b-b472aef6a7f6" containerID="a1e6b695ad6ef30972f811b240200ac9367d0685df85ce0e84abc110365fba0a" exitCode=137 Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.582135 5007 generic.go:334] "Generic (PLEG): container finished" podID="203ba913-bbde-4801-b85b-b472aef6a7f6" containerID="f4e898144095e4cab8ba7538f7ee70876d31691f3d2d74efd7147971910d92ee" exitCode=137 Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.582250 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b6f5f89-58c25" event={"ID":"203ba913-bbde-4801-b85b-b472aef6a7f6","Type":"ContainerDied","Data":"a1e6b695ad6ef30972f811b240200ac9367d0685df85ce0e84abc110365fba0a"} Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.582283 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b6f5f89-58c25" event={"ID":"203ba913-bbde-4801-b85b-b472aef6a7f6","Type":"ContainerDied","Data":"f4e898144095e4cab8ba7538f7ee70876d31691f3d2d74efd7147971910d92ee"} Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.585261 5007 generic.go:334] "Generic (PLEG): container 
finished" podID="1739350e-8a84-4c46-9bd2-7d98b9007dae" containerID="1fd64123f497b24af14d4d0c821251c079c19ee93a4cb7637f2dfd2218fa1901" exitCode=137 Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.585290 5007 generic.go:334] "Generic (PLEG): container finished" podID="1739350e-8a84-4c46-9bd2-7d98b9007dae" containerID="99f4fa773fc11c88474dd29ebd95a27024d8b08f5e4fd87e2072568e925548ae" exitCode=137 Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.585338 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68dccdd97-4vtx9" event={"ID":"1739350e-8a84-4c46-9bd2-7d98b9007dae","Type":"ContainerDied","Data":"1fd64123f497b24af14d4d0c821251c079c19ee93a4cb7637f2dfd2218fa1901"} Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.585378 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68dccdd97-4vtx9" event={"ID":"1739350e-8a84-4c46-9bd2-7d98b9007dae","Type":"ContainerDied","Data":"99f4fa773fc11c88474dd29ebd95a27024d8b08f5e4fd87e2072568e925548ae"} Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.588015 5007 generic.go:334] "Generic (PLEG): container finished" podID="e42c854b-1439-4a01-9a89-c034c0272d32" containerID="1d5242aa9ee3c014a0334fb9e3ee9735daa0229342c6e2f016635da62e01a950" exitCode=137 Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.588046 5007 generic.go:334] "Generic (PLEG): container finished" podID="e42c854b-1439-4a01-9a89-c034c0272d32" containerID="e484dfd4f1aa3c42e9598dd88b4a2457edfbf2c055f7b84b2df9ad77ea124bd0" exitCode=137 Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.588096 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59b5956c5c-bhzjl" event={"ID":"e42c854b-1439-4a01-9a89-c034c0272d32","Type":"ContainerDied","Data":"1d5242aa9ee3c014a0334fb9e3ee9735daa0229342c6e2f016635da62e01a950"} Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.588116 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-59b5956c5c-bhzjl" event={"ID":"e42c854b-1439-4a01-9a89-c034c0272d32","Type":"ContainerDied","Data":"e484dfd4f1aa3c42e9598dd88b4a2457edfbf2c055f7b84b2df9ad77ea124bd0"} Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.589661 5007 generic.go:334] "Generic (PLEG): container finished" podID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerID="60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7" exitCode=0 Feb 18 19:57:24 crc kubenswrapper[5007]: I0218 19:57:24.590576 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7957464b6-627s5" event={"ID":"b695b866-e754-46f6-8cd8-ef152bfc1a54","Type":"ContainerDied","Data":"60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7"} Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.554470 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w57rb" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.613317 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w57rb" event={"ID":"c60b1e07-e980-450b-bfb8-c0210edeae7d","Type":"ContainerDied","Data":"ce67be603610dcb020a8504ed0611a033eb0497cbd83a010f080c2cbc4490fa4"} Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.613351 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce67be603610dcb020a8504ed0611a033eb0497cbd83a010f080c2cbc4490fa4" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.613405 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w57rb" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.691085 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c60b1e07-e980-450b-bfb8-c0210edeae7d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c60b1e07-e980-450b-bfb8-c0210edeae7d" (UID: "c60b1e07-e980-450b-bfb8-c0210edeae7d"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.691370 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c60b1e07-e980-450b-bfb8-c0210edeae7d-etc-machine-id\") pod \"c60b1e07-e980-450b-bfb8-c0210edeae7d\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.691544 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-combined-ca-bundle\") pod \"c60b1e07-e980-450b-bfb8-c0210edeae7d\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.691799 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grhn7\" (UniqueName: \"kubernetes.io/projected/c60b1e07-e980-450b-bfb8-c0210edeae7d-kube-api-access-grhn7\") pod \"c60b1e07-e980-450b-bfb8-c0210edeae7d\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.694107 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-config-data\") pod \"c60b1e07-e980-450b-bfb8-c0210edeae7d\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.694855 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-db-sync-config-data\") pod \"c60b1e07-e980-450b-bfb8-c0210edeae7d\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.694926 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-scripts\") pod \"c60b1e07-e980-450b-bfb8-c0210edeae7d\" (UID: \"c60b1e07-e980-450b-bfb8-c0210edeae7d\") " Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.695936 5007 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c60b1e07-e980-450b-bfb8-c0210edeae7d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.701969 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-scripts" (OuterVolumeSpecName: "scripts") pod "c60b1e07-e980-450b-bfb8-c0210edeae7d" (UID: "c60b1e07-e980-450b-bfb8-c0210edeae7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.703785 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c60b1e07-e980-450b-bfb8-c0210edeae7d-kube-api-access-grhn7" (OuterVolumeSpecName: "kube-api-access-grhn7") pod "c60b1e07-e980-450b-bfb8-c0210edeae7d" (UID: "c60b1e07-e980-450b-bfb8-c0210edeae7d"). InnerVolumeSpecName "kube-api-access-grhn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.726694 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c60b1e07-e980-450b-bfb8-c0210edeae7d" (UID: "c60b1e07-e980-450b-bfb8-c0210edeae7d"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.726713 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c60b1e07-e980-450b-bfb8-c0210edeae7d" (UID: "c60b1e07-e980-450b-bfb8-c0210edeae7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.794372 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-config-data" (OuterVolumeSpecName: "config-data") pod "c60b1e07-e980-450b-bfb8-c0210edeae7d" (UID: "c60b1e07-e980-450b-bfb8-c0210edeae7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.797888 5007 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.797929 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.797941 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.797950 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grhn7\" (UniqueName: \"kubernetes.io/projected/c60b1e07-e980-450b-bfb8-c0210edeae7d-kube-api-access-grhn7\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.797960 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60b1e07-e980-450b-bfb8-c0210edeae7d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:25 crc kubenswrapper[5007]: I0218 19:57:25.874321 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:25 crc kubenswrapper[5007]: E0218 19:57:25.985996 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.002469 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-combined-ca-bundle\") pod \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.002584 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-config-data\") pod \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.002673 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-config-data-custom\") pod \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.002737 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pgsvn\" (UniqueName: \"kubernetes.io/projected/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-kube-api-access-pgsvn\") pod \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.003114 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-logs\") pod \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\" (UID: \"f737ff09-8d60-4d1c-b53b-6c3c3864ce67\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.003470 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-logs" (OuterVolumeSpecName: "logs") pod "f737ff09-8d60-4d1c-b53b-6c3c3864ce67" (UID: "f737ff09-8d60-4d1c-b53b-6c3c3864ce67"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.003875 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.008741 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f737ff09-8d60-4d1c-b53b-6c3c3864ce67" (UID: "f737ff09-8d60-4d1c-b53b-6c3c3864ce67"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.015008 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-kube-api-access-pgsvn" (OuterVolumeSpecName: "kube-api-access-pgsvn") pod "f737ff09-8d60-4d1c-b53b-6c3c3864ce67" (UID: "f737ff09-8d60-4d1c-b53b-6c3c3864ce67"). InnerVolumeSpecName "kube-api-access-pgsvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.042890 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f737ff09-8d60-4d1c-b53b-6c3c3864ce67" (UID: "f737ff09-8d60-4d1c-b53b-6c3c3864ce67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.059102 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-config-data" (OuterVolumeSpecName: "config-data") pod "f737ff09-8d60-4d1c-b53b-6c3c3864ce67" (UID: "f737ff09-8d60-4d1c-b53b-6c3c3864ce67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.074390 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.105311 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgsvn\" (UniqueName: \"kubernetes.io/projected/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-kube-api-access-pgsvn\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.105586 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.105648 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.105708 5007 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f737ff09-8d60-4d1c-b53b-6c3c3864ce67-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.206499 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-combined-ca-bundle\") pod \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.206614 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-config-data-custom\") pod \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.206692 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41e889c2-3cd7-448d-8115-0f1cf3d6320a-logs\") pod \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.206770 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-config-data\") pod \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.206839 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9tjx\" (UniqueName: \"kubernetes.io/projected/41e889c2-3cd7-448d-8115-0f1cf3d6320a-kube-api-access-c9tjx\") pod \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\" (UID: \"41e889c2-3cd7-448d-8115-0f1cf3d6320a\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.209728 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41e889c2-3cd7-448d-8115-0f1cf3d6320a-logs" (OuterVolumeSpecName: "logs") pod "41e889c2-3cd7-448d-8115-0f1cf3d6320a" (UID: "41e889c2-3cd7-448d-8115-0f1cf3d6320a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.217324 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "41e889c2-3cd7-448d-8115-0f1cf3d6320a" (UID: "41e889c2-3cd7-448d-8115-0f1cf3d6320a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.220194 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41e889c2-3cd7-448d-8115-0f1cf3d6320a-kube-api-access-c9tjx" (OuterVolumeSpecName: "kube-api-access-c9tjx") pod "41e889c2-3cd7-448d-8115-0f1cf3d6320a" (UID: "41e889c2-3cd7-448d-8115-0f1cf3d6320a"). InnerVolumeSpecName "kube-api-access-c9tjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.262783 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41e889c2-3cd7-448d-8115-0f1cf3d6320a" (UID: "41e889c2-3cd7-448d-8115-0f1cf3d6320a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.295113 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-config-data" (OuterVolumeSpecName: "config-data") pod "41e889c2-3cd7-448d-8115-0f1cf3d6320a" (UID: "41e889c2-3cd7-448d-8115-0f1cf3d6320a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.308716 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9tjx\" (UniqueName: \"kubernetes.io/projected/41e889c2-3cd7-448d-8115-0f1cf3d6320a-kube-api-access-c9tjx\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.308751 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.308760 5007 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.308768 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41e889c2-3cd7-448d-8115-0f1cf3d6320a-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.308776 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e889c2-3cd7-448d-8115-0f1cf3d6320a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.575870 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.586881 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.634642 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.634674 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.639114 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.650419 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b6f5f89-58c25" event={"ID":"203ba913-bbde-4801-b85b-b472aef6a7f6","Type":"ContainerDied","Data":"85d87c943020108847b49cde7f55e72a1360186524eff4afe4b7614b420a5be6"} Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.650484 5007 scope.go:117] "RemoveContainer" containerID="a1e6b695ad6ef30972f811b240200ac9367d0685df85ce0e84abc110365fba0a" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.650605 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68b6f5f89-58c25" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.686993 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af8794a0-c909-4418-ab5a-a4d89492f37f","Type":"ContainerStarted","Data":"77d4787cd2f2c74b1ce1eaeafac411f529ddf44f052e32cbf5b2e58628436644"} Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.687062 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerName="ceilometer-notification-agent" containerID="cri-o://9a851589bd0be215406165574c6cd5d804eead29f4ff260c78068f33cd79e824" gracePeriod=30 Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.687113 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerName="proxy-httpd" containerID="cri-o://77d4787cd2f2c74b1ce1eaeafac411f529ddf44f052e32cbf5b2e58628436644" gracePeriod=30 Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.687081 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.687173 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerName="sg-core" containerID="cri-o://650df3da70c3bec49b2c1ec9be1085770e2df8f56d43164e092d221fcad71d42" gracePeriod=30 Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.729380 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" event={"ID":"f737ff09-8d60-4d1c-b53b-6c3c3864ce67","Type":"ContainerDied","Data":"cfd0521393246442c0aed12296e201145c5292f74fd356cdc7da5f004f0c7336"} Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.729525 5007 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-76c66f4494-fz2df" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743095 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-dns-swift-storage-0\") pod \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743135 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203ba913-bbde-4801-b85b-b472aef6a7f6-logs\") pod \"203ba913-bbde-4801-b85b-b472aef6a7f6\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743156 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9bth\" (UniqueName: \"kubernetes.io/projected/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-kube-api-access-f9bth\") pod \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743177 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-config\") pod \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743219 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-public-tls-certs\") pod \"b110d261-b844-4d86-9226-c46707c2ecd4\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743251 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e42c854b-1439-4a01-9a89-c034c0272d32-horizon-secret-key\") pod \"e42c854b-1439-4a01-9a89-c034c0272d32\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743272 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-ovndb-tls-certs\") pod \"b110d261-b844-4d86-9226-c46707c2ecd4\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743291 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-internal-tls-certs\") pod \"b110d261-b844-4d86-9226-c46707c2ecd4\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743335 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/203ba913-bbde-4801-b85b-b472aef6a7f6-scripts\") pod \"203ba913-bbde-4801-b85b-b472aef6a7f6\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743353 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-combined-ca-bundle\") pod \"b110d261-b844-4d86-9226-c46707c2ecd4\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743374 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1739350e-8a84-4c46-9bd2-7d98b9007dae-horizon-secret-key\") pod \"1739350e-8a84-4c46-9bd2-7d98b9007dae\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " Feb 
18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743454 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1739350e-8a84-4c46-9bd2-7d98b9007dae-logs\") pod \"1739350e-8a84-4c46-9bd2-7d98b9007dae\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743475 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1739350e-8a84-4c46-9bd2-7d98b9007dae-config-data\") pod \"1739350e-8a84-4c46-9bd2-7d98b9007dae\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743492 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e42c854b-1439-4a01-9a89-c034c0272d32-scripts\") pod \"e42c854b-1439-4a01-9a89-c034c0272d32\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743506 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203ba913-bbde-4801-b85b-b472aef6a7f6-config-data\") pod \"203ba913-bbde-4801-b85b-b472aef6a7f6\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743539 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-httpd-config\") pod \"b110d261-b844-4d86-9226-c46707c2ecd4\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743588 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzbfp\" (UniqueName: \"kubernetes.io/projected/1739350e-8a84-4c46-9bd2-7d98b9007dae-kube-api-access-gzbfp\") pod 
\"1739350e-8a84-4c46-9bd2-7d98b9007dae\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743602 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1739350e-8a84-4c46-9bd2-7d98b9007dae-scripts\") pod \"1739350e-8a84-4c46-9bd2-7d98b9007dae\" (UID: \"1739350e-8a84-4c46-9bd2-7d98b9007dae\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743619 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42c854b-1439-4a01-9a89-c034c0272d32-logs\") pod \"e42c854b-1439-4a01-9a89-c034c0272d32\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743641 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jltr8\" (UniqueName: \"kubernetes.io/projected/203ba913-bbde-4801-b85b-b472aef6a7f6-kube-api-access-jltr8\") pod \"203ba913-bbde-4801-b85b-b472aef6a7f6\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743674 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-dns-svc\") pod \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743691 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/203ba913-bbde-4801-b85b-b472aef6a7f6-horizon-secret-key\") pod \"203ba913-bbde-4801-b85b-b472aef6a7f6\" (UID: \"203ba913-bbde-4801-b85b-b472aef6a7f6\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743707 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4n89g\" (UniqueName: \"kubernetes.io/projected/b110d261-b844-4d86-9226-c46707c2ecd4-kube-api-access-4n89g\") pod \"b110d261-b844-4d86-9226-c46707c2ecd4\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743725 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e42c854b-1439-4a01-9a89-c034c0272d32-config-data\") pod \"e42c854b-1439-4a01-9a89-c034c0272d32\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743741 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49m6n\" (UniqueName: \"kubernetes.io/projected/e42c854b-1439-4a01-9a89-c034c0272d32-kube-api-access-49m6n\") pod \"e42c854b-1439-4a01-9a89-c034c0272d32\" (UID: \"e42c854b-1439-4a01-9a89-c034c0272d32\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743769 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-ovsdbserver-nb\") pod \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743795 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-config\") pod \"b110d261-b844-4d86-9226-c46707c2ecd4\" (UID: \"b110d261-b844-4d86-9226-c46707c2ecd4\") " Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.743817 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-ovsdbserver-sb\") pod \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\" (UID: \"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c\") " Feb 18 19:57:26 
crc kubenswrapper[5007]: I0218 19:57:26.743907 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/203ba913-bbde-4801-b85b-b472aef6a7f6-logs" (OuterVolumeSpecName: "logs") pod "203ba913-bbde-4801-b85b-b472aef6a7f6" (UID: "203ba913-bbde-4801-b85b-b472aef6a7f6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.744241 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203ba913-bbde-4801-b85b-b472aef6a7f6-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.759916 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68dccdd97-4vtx9" event={"ID":"1739350e-8a84-4c46-9bd2-7d98b9007dae","Type":"ContainerDied","Data":"7bdd226bb51991f955fdaef1c6f6a15a90aca18f50ca255eae46c721f0119730"} Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.760028 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68dccdd97-4vtx9" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.761652 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e42c854b-1439-4a01-9a89-c034c0272d32-logs" (OuterVolumeSpecName: "logs") pod "e42c854b-1439-4a01-9a89-c034c0272d32" (UID: "e42c854b-1439-4a01-9a89-c034c0272d32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.776480 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42c854b-1439-4a01-9a89-c034c0272d32-kube-api-access-49m6n" (OuterVolumeSpecName: "kube-api-access-49m6n") pod "e42c854b-1439-4a01-9a89-c034c0272d32" (UID: "e42c854b-1439-4a01-9a89-c034c0272d32"). InnerVolumeSpecName "kube-api-access-49m6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.781713 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b110d261-b844-4d86-9226-c46707c2ecd4-kube-api-access-4n89g" (OuterVolumeSpecName: "kube-api-access-4n89g") pod "b110d261-b844-4d86-9226-c46707c2ecd4" (UID: "b110d261-b844-4d86-9226-c46707c2ecd4"). InnerVolumeSpecName "kube-api-access-4n89g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.783340 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1739350e-8a84-4c46-9bd2-7d98b9007dae-logs" (OuterVolumeSpecName: "logs") pod "1739350e-8a84-4c46-9bd2-7d98b9007dae" (UID: "1739350e-8a84-4c46-9bd2-7d98b9007dae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.806873 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b110d261-b844-4d86-9226-c46707c2ecd4" (UID: "b110d261-b844-4d86-9226-c46707c2ecd4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.806990 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-kube-api-access-f9bth" (OuterVolumeSpecName: "kube-api-access-f9bth") pod "bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" (UID: "bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c"). InnerVolumeSpecName "kube-api-access-f9bth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.807042 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42c854b-1439-4a01-9a89-c034c0272d32-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e42c854b-1439-4a01-9a89-c034c0272d32" (UID: "e42c854b-1439-4a01-9a89-c034c0272d32"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.807943 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765c898489-2trsf" event={"ID":"bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c","Type":"ContainerDied","Data":"c3d5965a0c722bdf99ba4c46cff9ebb12a3ee423b674c313fb6ae4ad697705ef"} Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.808149 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-765c898489-2trsf" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.808813 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b9cfd5589-mgdww"] Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.815133 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1739350e-8a84-4c46-9bd2-7d98b9007dae-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1739350e-8a84-4c46-9bd2-7d98b9007dae" (UID: "1739350e-8a84-4c46-9bd2-7d98b9007dae"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.822786 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/203ba913-bbde-4801-b85b-b472aef6a7f6-kube-api-access-jltr8" (OuterVolumeSpecName: "kube-api-access-jltr8") pod "203ba913-bbde-4801-b85b-b472aef6a7f6" (UID: "203ba913-bbde-4801-b85b-b472aef6a7f6"). 
InnerVolumeSpecName "kube-api-access-jltr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.822864 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1739350e-8a84-4c46-9bd2-7d98b9007dae-kube-api-access-gzbfp" (OuterVolumeSpecName: "kube-api-access-gzbfp") pod "1739350e-8a84-4c46-9bd2-7d98b9007dae" (UID: "1739350e-8a84-4c46-9bd2-7d98b9007dae"). InnerVolumeSpecName "kube-api-access-gzbfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.857507 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59b5956c5c-bhzjl" event={"ID":"e42c854b-1439-4a01-9a89-c034c0272d32","Type":"ContainerDied","Data":"bb45a0ed1f649f890541639f6b57d7f5eafbf15481916ce21f0ea7e3e84d6898"} Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.857608 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59b5956c5c-bhzjl" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.866675 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9bth\" (UniqueName: \"kubernetes.io/projected/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-kube-api-access-f9bth\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.866708 5007 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e42c854b-1439-4a01-9a89-c034c0272d32-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.866721 5007 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1739350e-8a84-4c46-9bd2-7d98b9007dae-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.866730 5007 reconciler_common.go:293] "Volume detached 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1739350e-8a84-4c46-9bd2-7d98b9007dae-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.866739 5007 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.866747 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzbfp\" (UniqueName: \"kubernetes.io/projected/1739350e-8a84-4c46-9bd2-7d98b9007dae-kube-api-access-gzbfp\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.866755 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42c854b-1439-4a01-9a89-c034c0272d32-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.866764 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jltr8\" (UniqueName: \"kubernetes.io/projected/203ba913-bbde-4801-b85b-b472aef6a7f6-kube-api-access-jltr8\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.866775 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n89g\" (UniqueName: \"kubernetes.io/projected/b110d261-b844-4d86-9226-c46707c2ecd4-kube-api-access-4n89g\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.866783 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49m6n\" (UniqueName: \"kubernetes.io/projected/e42c854b-1439-4a01-9a89-c034c0272d32-kube-api-access-49m6n\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.869927 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb55c746c-lpr88" 
event={"ID":"b110d261-b844-4d86-9226-c46707c2ecd4","Type":"ContainerDied","Data":"5c686746e0cb543e5830a07c125dcacd61faff2ebe2c48008ed1443cd7338917"} Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.870027 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb55c746c-lpr88" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.873878 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874296 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41e889c2-3cd7-448d-8115-0f1cf3d6320a" containerName="barbican-worker" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874329 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e889c2-3cd7-448d-8115-0f1cf3d6320a" containerName="barbican-worker" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874344 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1739350e-8a84-4c46-9bd2-7d98b9007dae" containerName="horizon" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874350 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="1739350e-8a84-4c46-9bd2-7d98b9007dae" containerName="horizon" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874358 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f737ff09-8d60-4d1c-b53b-6c3c3864ce67" containerName="barbican-keystone-listener-log" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874363 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f737ff09-8d60-4d1c-b53b-6c3c3864ce67" containerName="barbican-keystone-listener-log" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874373 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42c854b-1439-4a01-9a89-c034c0272d32" containerName="horizon" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874378 5007 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e42c854b-1439-4a01-9a89-c034c0272d32" containerName="horizon" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874385 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203ba913-bbde-4801-b85b-b472aef6a7f6" containerName="horizon-log" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874407 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="203ba913-bbde-4801-b85b-b472aef6a7f6" containerName="horizon-log" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874417 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203ba913-bbde-4801-b85b-b472aef6a7f6" containerName="horizon" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874423 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="203ba913-bbde-4801-b85b-b472aef6a7f6" containerName="horizon" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874479 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41e889c2-3cd7-448d-8115-0f1cf3d6320a" containerName="barbican-worker-log" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874485 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e889c2-3cd7-448d-8115-0f1cf3d6320a" containerName="barbican-worker-log" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874499 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" containerName="dnsmasq-dns" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874505 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" containerName="dnsmasq-dns" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874513 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" containerName="init" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874518 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" 
containerName="init" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874549 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60b1e07-e980-450b-bfb8-c0210edeae7d" containerName="cinder-db-sync" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874556 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60b1e07-e980-450b-bfb8-c0210edeae7d" containerName="cinder-db-sync" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874566 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f737ff09-8d60-4d1c-b53b-6c3c3864ce67" containerName="barbican-keystone-listener" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874572 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f737ff09-8d60-4d1c-b53b-6c3c3864ce67" containerName="barbican-keystone-listener" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874582 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42c854b-1439-4a01-9a89-c034c0272d32" containerName="horizon-log" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874588 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42c854b-1439-4a01-9a89-c034c0272d32" containerName="horizon-log" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874652 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b110d261-b844-4d86-9226-c46707c2ecd4" containerName="neutron-httpd" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874659 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="b110d261-b844-4d86-9226-c46707c2ecd4" containerName="neutron-httpd" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874670 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1739350e-8a84-4c46-9bd2-7d98b9007dae" containerName="horizon-log" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874676 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="1739350e-8a84-4c46-9bd2-7d98b9007dae" 
containerName="horizon-log" Feb 18 19:57:26 crc kubenswrapper[5007]: E0218 19:57:26.874687 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b110d261-b844-4d86-9226-c46707c2ecd4" containerName="neutron-api" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.874693 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="b110d261-b844-4d86-9226-c46707c2ecd4" containerName="neutron-api" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.875900 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42c854b-1439-4a01-9a89-c034c0272d32" containerName="horizon" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.875921 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="1739350e-8a84-4c46-9bd2-7d98b9007dae" containerName="horizon" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.875928 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="b110d261-b844-4d86-9226-c46707c2ecd4" containerName="neutron-api" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.875961 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="1739350e-8a84-4c46-9bd2-7d98b9007dae" containerName="horizon-log" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.875968 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="41e889c2-3cd7-448d-8115-0f1cf3d6320a" containerName="barbican-worker-log" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.875977 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="203ba913-bbde-4801-b85b-b472aef6a7f6" containerName="horizon-log" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.875989 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f737ff09-8d60-4d1c-b53b-6c3c3864ce67" containerName="barbican-keystone-listener" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.876000 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="c60b1e07-e980-450b-bfb8-c0210edeae7d" 
containerName="cinder-db-sync" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.876009 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="203ba913-bbde-4801-b85b-b472aef6a7f6" containerName="horizon" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.876018 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" containerName="dnsmasq-dns" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.876044 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42c854b-1439-4a01-9a89-c034c0272d32" containerName="horizon-log" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.876052 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="b110d261-b844-4d86-9226-c46707c2ecd4" containerName="neutron-httpd" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.876065 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="41e889c2-3cd7-448d-8115-0f1cf3d6320a" containerName="barbican-worker" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.876074 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f737ff09-8d60-4d1c-b53b-6c3c3864ce67" containerName="barbican-keystone-listener-log" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.880391 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.883106 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203ba913-bbde-4801-b85b-b472aef6a7f6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "203ba913-bbde-4801-b85b-b472aef6a7f6" (UID: "203ba913-bbde-4801-b85b-b472aef6a7f6"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.886560 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fdb8776f-pxvts" event={"ID":"41e889c2-3cd7-448d-8115-0f1cf3d6320a","Type":"ContainerDied","Data":"5fed0fd6cfe3534be7399de654efe795cd2a775b699a48863cb8b965301a1916"} Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.886669 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fdb8776f-pxvts" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.898769 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.899699 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.901460 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tck47" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.902530 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.948021 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1739350e-8a84-4c46-9bd2-7d98b9007dae-scripts" (OuterVolumeSpecName: "scripts") pod "1739350e-8a84-4c46-9bd2-7d98b9007dae" (UID: "1739350e-8a84-4c46-9bd2-7d98b9007dae"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.970932 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-scripts\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.970979 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.971019 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.971077 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksnbp\" (UniqueName: \"kubernetes.io/projected/82419489-dde1-4606-807c-95aeb1a19636-kube-api-access-ksnbp\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.971107 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-config-data\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0" Feb 
18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.971190 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82419489-dde1-4606-807c-95aeb1a19636-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.971267 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1739350e-8a84-4c46-9bd2-7d98b9007dae-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:26 crc kubenswrapper[5007]: I0218 19:57:26.971281 5007 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/203ba913-bbde-4801-b85b-b472aef6a7f6-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.000281 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.008366 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/203ba913-bbde-4801-b85b-b472aef6a7f6-config-data" (OuterVolumeSpecName: "config-data") pod "203ba913-bbde-4801-b85b-b472aef6a7f6" (UID: "203ba913-bbde-4801-b85b-b472aef6a7f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.070800 5007 scope.go:117] "RemoveContainer" containerID="f4e898144095e4cab8ba7538f7ee70876d31691f3d2d74efd7147971910d92ee"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.071377 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/203ba913-bbde-4801-b85b-b472aef6a7f6-scripts" (OuterVolumeSpecName: "scripts") pod "203ba913-bbde-4801-b85b-b472aef6a7f6" (UID: "203ba913-bbde-4801-b85b-b472aef6a7f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.072782 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82419489-dde1-4606-807c-95aeb1a19636-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.072883 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-scripts\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.072906 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.072945 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.073004 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksnbp\" (UniqueName: \"kubernetes.io/projected/82419489-dde1-4606-807c-95aeb1a19636-kube-api-access-ksnbp\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.073037 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-config-data\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.073130 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/203ba913-bbde-4801-b85b-b472aef6a7f6-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.073141 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/203ba913-bbde-4801-b85b-b472aef6a7f6-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.074793 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82419489-dde1-4606-807c-95aeb1a19636-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.079196 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66c5ff9dd9-cjthg"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.081017 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.085516 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-scripts\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.103096 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1739350e-8a84-4c46-9bd2-7d98b9007dae-config-data" (OuterVolumeSpecName: "config-data") pod "1739350e-8a84-4c46-9bd2-7d98b9007dae" (UID: "1739350e-8a84-4c46-9bd2-7d98b9007dae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.109334 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-config-data\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.109757 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.110812 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.115055 5007 scope.go:117] "RemoveContainer" containerID="17e10e3c6149311ee42d55e79626945ff39b17f538138379d6526bc5d334e215"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.118212 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42c854b-1439-4a01-9a89-c034c0272d32-scripts" (OuterVolumeSpecName: "scripts") pod "e42c854b-1439-4a01-9a89-c034c0272d32" (UID: "e42c854b-1439-4a01-9a89-c034c0272d32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.121588 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksnbp\" (UniqueName: \"kubernetes.io/projected/82419489-dde1-4606-807c-95aeb1a19636-kube-api-access-ksnbp\") pod \"cinder-scheduler-0\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.128790 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66c5ff9dd9-cjthg"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.151071 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-76c66f4494-fz2df"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.166856 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42c854b-1439-4a01-9a89-c034c0272d32-config-data" (OuterVolumeSpecName: "config-data") pod "e42c854b-1439-4a01-9a89-c034c0272d32" (UID: "e42c854b-1439-4a01-9a89-c034c0272d32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.169557 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-76c66f4494-fz2df"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.171687 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" (UID: "bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.175561 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bq2\" (UniqueName: \"kubernetes.io/projected/be7a20a5-350e-4f7d-8532-7a6e7198d17f-kube-api-access-c9bq2\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.175625 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-dns-svc\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.175657 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-dns-swift-storage-0\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.175681 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-ovsdbserver-nb\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.175714 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-config\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.175803 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-ovsdbserver-sb\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.175877 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e42c854b-1439-4a01-9a89-c034c0272d32-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.175891 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.175904 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1739350e-8a84-4c46-9bd2-7d98b9007dae-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.175913 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e42c854b-1439-4a01-9a89-c034c0272d32-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.183918 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5fdb8776f-pxvts"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.190592 5007 scope.go:117] "RemoveContainer" containerID="7f3447ec62638ac73c1aa271a47cff9bc6f1a43baeda8365aeb8f29e5f658bbd"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.191270 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" (UID: "bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.191352 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-config" (OuterVolumeSpecName: "config") pod "bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" (UID: "bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.192192 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5fdb8776f-pxvts"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.218854 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.220612 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.228120 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.235761 5007 scope.go:117] "RemoveContainer" containerID="1fd64123f497b24af14d4d0c821251c079c19ee93a4cb7637f2dfd2218fa1901"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.236518 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.245900 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" (UID: "bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.255673 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.261329 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" (UID: "bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.261605 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b110d261-b844-4d86-9226-c46707c2ecd4" (UID: "b110d261-b844-4d86-9226-c46707c2ecd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.275614 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b110d261-b844-4d86-9226-c46707c2ecd4" (UID: "b110d261-b844-4d86-9226-c46707c2ecd4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.278155 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-config\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.278359 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-ovsdbserver-sb\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.279339 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-config\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.279545 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-ovsdbserver-sb\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.280177 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bq2\" (UniqueName: \"kubernetes.io/projected/be7a20a5-350e-4f7d-8532-7a6e7198d17f-kube-api-access-c9bq2\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.280282 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-dns-svc\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.280304 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-dns-swift-storage-0\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.280342 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-ovsdbserver-nb\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.280495 5007 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.280509 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.280521 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.280531 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.280541 5007 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.280549 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.280985 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-dns-svc\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.281121 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-ovsdbserver-nb\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.281543 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-dns-swift-storage-0\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.291440 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b110d261-b844-4d86-9226-c46707c2ecd4" (UID: "b110d261-b844-4d86-9226-c46707c2ecd4"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.297387 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bq2\" (UniqueName: \"kubernetes.io/projected/be7a20a5-350e-4f7d-8532-7a6e7198d17f-kube-api-access-c9bq2\") pod \"dnsmasq-dns-66c5ff9dd9-cjthg\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.304968 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-config" (OuterVolumeSpecName: "config") pod "b110d261-b844-4d86-9226-c46707c2ecd4" (UID: "b110d261-b844-4d86-9226-c46707c2ecd4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.346234 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68b6f5f89-58c25"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.346612 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b110d261-b844-4d86-9226-c46707c2ecd4" (UID: "b110d261-b844-4d86-9226-c46707c2ecd4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.360996 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68b6f5f89-58c25"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.383625 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-config-data\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.383764 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrqh\" (UniqueName: \"kubernetes.io/projected/315cb998-3184-42ba-9be1-68ddaff29f16-kube-api-access-7xrqh\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.383788 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/315cb998-3184-42ba-9be1-68ddaff29f16-etc-machine-id\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.384048 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-config-data-custom\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.384096 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-scripts\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.384113 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.384171 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/315cb998-3184-42ba-9be1-68ddaff29f16-logs\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.384269 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.384281 5007 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.384292 5007 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b110d261-b844-4d86-9226-c46707c2ecd4-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.418267 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.470054 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-765c898489-2trsf"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.480153 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-765c898489-2trsf"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.485570 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-config-data\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.485615 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xrqh\" (UniqueName: \"kubernetes.io/projected/315cb998-3184-42ba-9be1-68ddaff29f16-kube-api-access-7xrqh\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.485641 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/315cb998-3184-42ba-9be1-68ddaff29f16-etc-machine-id\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.485711 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-config-data-custom\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.485733 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-scripts\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.485775 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.485802 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/315cb998-3184-42ba-9be1-68ddaff29f16-logs\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.486441 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/315cb998-3184-42ba-9be1-68ddaff29f16-logs\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.488041 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/315cb998-3184-42ba-9be1-68ddaff29f16-etc-machine-id\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.492161 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-scripts\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.496298 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-config-data\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.500674 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68dccdd97-4vtx9"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.504115 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.506811 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-config-data-custom\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.515945 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68dccdd97-4vtx9"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.520945 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xrqh\" (UniqueName: \"kubernetes.io/projected/315cb998-3184-42ba-9be1-68ddaff29f16-kube-api-access-7xrqh\") pod \"cinder-api-0\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.523721 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7957464b6-627s5" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.525044 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59b5956c5c-bhzjl"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.527789 5007 scope.go:117] "RemoveContainer" containerID="99f4fa773fc11c88474dd29ebd95a27024d8b08f5e4fd87e2072568e925548ae"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.539253 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59b5956c5c-bhzjl"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.550495 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.550948 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bb55c746c-lpr88"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.562902 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bb55c746c-lpr88"]
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.758608 5007 scope.go:117] "RemoveContainer" containerID="b55d84867b96866b4985b5b892baa75aeed5f96ed4b31004b4c3b000262a13cb"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.857191 5007 scope.go:117] "RemoveContainer" containerID="42510dba712afe0dfec9d7ca0d467d36fd35d97f9b71ab5b241e3fcdabbf0316"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.923278 5007 scope.go:117] "RemoveContainer" containerID="1d5242aa9ee3c014a0334fb9e3ee9735daa0229342c6e2f016635da62e01a950"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.923484 5007 generic.go:334] "Generic (PLEG): container finished" podID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerID="77d4787cd2f2c74b1ce1eaeafac411f529ddf44f052e32cbf5b2e58628436644" exitCode=0
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.923506 5007 generic.go:334] "Generic (PLEG): container finished" podID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerID="650df3da70c3bec49b2c1ec9be1085770e2df8f56d43164e092d221fcad71d42" exitCode=2
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.923514 5007 generic.go:334] "Generic (PLEG): container finished" podID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerID="9a851589bd0be215406165574c6cd5d804eead29f4ff260c78068f33cd79e824" exitCode=0
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.941536 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1739350e-8a84-4c46-9bd2-7d98b9007dae" path="/var/lib/kubelet/pods/1739350e-8a84-4c46-9bd2-7d98b9007dae/volumes"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.942226 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="203ba913-bbde-4801-b85b-b472aef6a7f6" path="/var/lib/kubelet/pods/203ba913-bbde-4801-b85b-b472aef6a7f6/volumes"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.942926 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41e889c2-3cd7-448d-8115-0f1cf3d6320a" path="/var/lib/kubelet/pods/41e889c2-3cd7-448d-8115-0f1cf3d6320a/volumes"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.945472 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b110d261-b844-4d86-9226-c46707c2ecd4" path="/var/lib/kubelet/pods/b110d261-b844-4d86-9226-c46707c2ecd4/volumes"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.946388 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c" path="/var/lib/kubelet/pods/bda4b1dd-7a09-4ee0-9fd4-dafacc7a151c/volumes"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.947686 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42c854b-1439-4a01-9a89-c034c0272d32" path="/var/lib/kubelet/pods/e42c854b-1439-4a01-9a89-c034c0272d32/volumes"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.948512 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f737ff09-8d60-4d1c-b53b-6c3c3864ce67" path="/var/lib/kubelet/pods/f737ff09-8d60-4d1c-b53b-6c3c3864ce67/volumes"
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.949213 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af8794a0-c909-4418-ab5a-a4d89492f37f","Type":"ContainerDied","Data":"77d4787cd2f2c74b1ce1eaeafac411f529ddf44f052e32cbf5b2e58628436644"}
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.949244 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af8794a0-c909-4418-ab5a-a4d89492f37f","Type":"ContainerDied","Data":"650df3da70c3bec49b2c1ec9be1085770e2df8f56d43164e092d221fcad71d42"}
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.949257 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af8794a0-c909-4418-ab5a-a4d89492f37f","Type":"ContainerDied","Data":"9a851589bd0be215406165574c6cd5d804eead29f4ff260c78068f33cd79e824"}
Feb 18 19:57:27 crc kubenswrapper[5007]: I0218 19:57:27.977838 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9cfd5589-mgdww" event={"ID":"f37e878a-a627-494f-b7b6-0d57d8960ade","Type":"ContainerStarted","Data":"243b65f14279465a8121cc60e1fa84aa9a1fa35f30119360fc4ffdafbb67a719"}
Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.148596 5007 scope.go:117] "RemoveContainer" containerID="e484dfd4f1aa3c42e9598dd88b4a2457edfbf2c055f7b84b2df9ad77ea124bd0"
Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.221645 5007 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.241318 5007 scope.go:117] "RemoveContainer" containerID="2de1370864bd8d4db31250bfbcf1aea05924b76da01ff9133f9f833454f7729e" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.287769 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.299610 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.305055 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af8794a0-c909-4418-ab5a-a4d89492f37f-log-httpd\") pod \"af8794a0-c909-4418-ab5a-a4d89492f37f\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.305140 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af8794a0-c909-4418-ab5a-a4d89492f37f-run-httpd\") pod \"af8794a0-c909-4418-ab5a-a4d89492f37f\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.305206 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-config-data\") pod \"af8794a0-c909-4418-ab5a-a4d89492f37f\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.305336 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-scripts\") pod \"af8794a0-c909-4418-ab5a-a4d89492f37f\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.305398 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-sg-core-conf-yaml\") pod \"af8794a0-c909-4418-ab5a-a4d89492f37f\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.305417 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-combined-ca-bundle\") pod \"af8794a0-c909-4418-ab5a-a4d89492f37f\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.305601 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrzph\" (UniqueName: \"kubernetes.io/projected/af8794a0-c909-4418-ab5a-a4d89492f37f-kube-api-access-mrzph\") pod \"af8794a0-c909-4418-ab5a-a4d89492f37f\" (UID: \"af8794a0-c909-4418-ab5a-a4d89492f37f\") " Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.306012 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af8794a0-c909-4418-ab5a-a4d89492f37f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "af8794a0-c909-4418-ab5a-a4d89492f37f" (UID: "af8794a0-c909-4418-ab5a-a4d89492f37f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.307041 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af8794a0-c909-4418-ab5a-a4d89492f37f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "af8794a0-c909-4418-ab5a-a4d89492f37f" (UID: "af8794a0-c909-4418-ab5a-a4d89492f37f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.315655 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8794a0-c909-4418-ab5a-a4d89492f37f-kube-api-access-mrzph" (OuterVolumeSpecName: "kube-api-access-mrzph") pod "af8794a0-c909-4418-ab5a-a4d89492f37f" (UID: "af8794a0-c909-4418-ab5a-a4d89492f37f"). InnerVolumeSpecName "kube-api-access-mrzph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.318604 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-scripts" (OuterVolumeSpecName: "scripts") pod "af8794a0-c909-4418-ab5a-a4d89492f37f" (UID: "af8794a0-c909-4418-ab5a-a4d89492f37f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.334640 5007 scope.go:117] "RemoveContainer" containerID="bf06207a9ac56ae4342853d3d79982a6a06136f83560a6fbb0cb101b3b10dc8b" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.370785 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "af8794a0-c909-4418-ab5a-a4d89492f37f" (UID: "af8794a0-c909-4418-ab5a-a4d89492f37f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.390508 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af8794a0-c909-4418-ab5a-a4d89492f37f" (UID: "af8794a0-c909-4418-ab5a-a4d89492f37f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.408926 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.408958 5007 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.408967 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.408976 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrzph\" (UniqueName: \"kubernetes.io/projected/af8794a0-c909-4418-ab5a-a4d89492f37f-kube-api-access-mrzph\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.408984 5007 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af8794a0-c909-4418-ab5a-a4d89492f37f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.408993 5007 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af8794a0-c909-4418-ab5a-a4d89492f37f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.413719 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66c5ff9dd9-cjthg"] Feb 18 19:57:28 crc kubenswrapper[5007]: W0218 19:57:28.443599 5007 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe7a20a5_350e_4f7d_8532_7a6e7198d17f.slice/crio-869d4c56267a4b99f21b5cac4c69f6f917c181e58b5b2ecdb789a18b1461dcbb WatchSource:0}: Error finding container 869d4c56267a4b99f21b5cac4c69f6f917c181e58b5b2ecdb789a18b1461dcbb: Status 404 returned error can't find the container with id 869d4c56267a4b99f21b5cac4c69f6f917c181e58b5b2ecdb789a18b1461dcbb Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.473208 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.488507 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-config-data" (OuterVolumeSpecName: "config-data") pod "af8794a0-c909-4418-ab5a-a4d89492f37f" (UID: "af8794a0-c909-4418-ab5a-a4d89492f37f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.515140 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8794a0-c909-4418-ab5a-a4d89492f37f-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.649211 5007 scope.go:117] "RemoveContainer" containerID="1079c215f6d40e61bfb3043b99968e40debfaa75b7672af314d80983870365c9" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.695456 5007 scope.go:117] "RemoveContainer" containerID="c576ed74566e13d54c4e1f444f6822de09ed17225bb490a9659adb4bca2b462b" Feb 18 19:57:28 crc kubenswrapper[5007]: I0218 19:57:28.914265 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-646fd64f64-42wzb" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:28.999983 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cf99f7bdb-5kdlj"] Feb 18 
19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.000669 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" podUID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" containerName="barbican-api-log" containerID="cri-o://9c7e4237a51292f61a672df7abb7d7cfbf610914c90c30ab352313745a46ba00" gracePeriod=30 Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.001186 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" podUID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" containerName="barbican-api" containerID="cri-o://a68823f2a6faacffdbdb7a75a4632b3b886cf6c2ecd8136a7b40e4ac34d70398" gracePeriod=30 Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.023880 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" podUID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.181:9311/healthcheck\": EOF" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.035827 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" podUID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.181:9311/healthcheck\": EOF" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.068746 5007 generic.go:334] "Generic (PLEG): container finished" podID="be7a20a5-350e-4f7d-8532-7a6e7198d17f" containerID="bc7510da3ceaa6d8a006bc3a0c7566c16a11808254150d6ea12897ee3a3be11a" exitCode=0 Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.069689 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg" event={"ID":"be7a20a5-350e-4f7d-8532-7a6e7198d17f","Type":"ContainerDied","Data":"bc7510da3ceaa6d8a006bc3a0c7566c16a11808254150d6ea12897ee3a3be11a"} Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.069729 
5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg" event={"ID":"be7a20a5-350e-4f7d-8532-7a6e7198d17f","Type":"ContainerStarted","Data":"869d4c56267a4b99f21b5cac4c69f6f917c181e58b5b2ecdb789a18b1461dcbb"} Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.114543 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"315cb998-3184-42ba-9be1-68ddaff29f16","Type":"ContainerStarted","Data":"33d2e60d2dfd0e77c4822b48b098f48295ee832254044b023208c4eefd0113f1"} Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.265800 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af8794a0-c909-4418-ab5a-a4d89492f37f","Type":"ContainerDied","Data":"f788b687f207e24fea0a023c5aceea35fd4377b8b676338631f4ef96f18a3f07"} Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.265861 5007 scope.go:117] "RemoveContainer" containerID="77d4787cd2f2c74b1ce1eaeafac411f529ddf44f052e32cbf5b2e58628436644" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.266074 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.419215 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9cfd5589-mgdww" event={"ID":"f37e878a-a627-494f-b7b6-0d57d8960ade","Type":"ContainerStarted","Data":"2b90143f146a97916d2898382f69e60cb7bea7d8d66d00859342ca6d3bc0a8b6"} Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.419263 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9cfd5589-mgdww" event={"ID":"f37e878a-a627-494f-b7b6-0d57d8960ade","Type":"ContainerStarted","Data":"8705e30da56ee8cf3feb7020f47ef781cb736e37bbc682282139c280a680efe1"} Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.420476 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.467392 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.481547 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82419489-dde1-4606-807c-95aeb1a19636","Type":"ContainerStarted","Data":"0ca6bc6be37497fb7e471a42ae49de64fd317b1b1963b04c8eb332759dc2835b"} Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.512500 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.535744 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:29 crc kubenswrapper[5007]: E0218 19:57:29.536318 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerName="sg-core" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.536337 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerName="sg-core" Feb 18 19:57:29 crc 
kubenswrapper[5007]: E0218 19:57:29.536360 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerName="ceilometer-notification-agent" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.536369 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerName="ceilometer-notification-agent" Feb 18 19:57:29 crc kubenswrapper[5007]: E0218 19:57:29.536423 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerName="proxy-httpd" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.536448 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerName="proxy-httpd" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.536668 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerName="proxy-httpd" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.536694 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerName="sg-core" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.536711 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" containerName="ceilometer-notification-agent" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.553225 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.553824 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.560517 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b9cfd5589-mgdww" podStartSLOduration=10.56050177 podStartE2EDuration="10.56050177s" podCreationTimestamp="2026-02-18 19:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:29.47704458 +0000 UTC m=+1134.259164114" watchObservedRunningTime="2026-02-18 19:57:29.56050177 +0000 UTC m=+1134.342621294" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.587609 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.587621 5007 scope.go:117] "RemoveContainer" containerID="650df3da70c3bec49b2c1ec9be1085770e2df8f56d43164e092d221fcad71d42" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.588602 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.639215 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-config-data\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.639256 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86e68c1d-c428-4b53-8e46-7f179faddc5c-log-httpd\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " 
pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.639319 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.639346 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-scripts\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.639436 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8wj\" (UniqueName: \"kubernetes.io/projected/86e68c1d-c428-4b53-8e46-7f179faddc5c-kube-api-access-9c8wj\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.639482 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86e68c1d-c428-4b53-8e46-7f179faddc5c-run-httpd\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.639504 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.714520 5007 scope.go:117] 
"RemoveContainer" containerID="9a851589bd0be215406165574c6cd5d804eead29f4ff260c78068f33cd79e824" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.744204 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.744284 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-scripts\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.744360 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c8wj\" (UniqueName: \"kubernetes.io/projected/86e68c1d-c428-4b53-8e46-7f179faddc5c-kube-api-access-9c8wj\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.744410 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86e68c1d-c428-4b53-8e46-7f179faddc5c-run-httpd\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.744443 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.744461 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-config-data\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.744477 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86e68c1d-c428-4b53-8e46-7f179faddc5c-log-httpd\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.745197 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86e68c1d-c428-4b53-8e46-7f179faddc5c-log-httpd\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.746111 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86e68c1d-c428-4b53-8e46-7f179faddc5c-run-httpd\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.755484 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-scripts\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.756337 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc 
kubenswrapper[5007]: I0218 19:57:29.778078 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-config-data\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.781158 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.798125 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c8wj\" (UniqueName: \"kubernetes.io/projected/86e68c1d-c428-4b53-8e46-7f179faddc5c-kube-api-access-9c8wj\") pod \"ceilometer-0\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.918950 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:57:29 crc kubenswrapper[5007]: I0218 19:57:29.926752 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8794a0-c909-4418-ab5a-a4d89492f37f" path="/var/lib/kubelet/pods/af8794a0-c909-4418-ab5a-a4d89492f37f/volumes" Feb 18 19:57:30 crc kubenswrapper[5007]: I0218 19:57:30.094273 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:57:30 crc kubenswrapper[5007]: I0218 19:57:30.500534 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82419489-dde1-4606-807c-95aeb1a19636","Type":"ContainerStarted","Data":"5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f"} Feb 18 19:57:30 crc kubenswrapper[5007]: I0218 19:57:30.510796 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg" event={"ID":"be7a20a5-350e-4f7d-8532-7a6e7198d17f","Type":"ContainerStarted","Data":"71ed0c12e49a861bce007f9cbe64c13a43f0d4d8d87eba94f4e6967732eb2b05"} Feb 18 19:57:30 crc kubenswrapper[5007]: I0218 19:57:30.511959 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg" Feb 18 19:57:30 crc kubenswrapper[5007]: I0218 19:57:30.521901 5007 generic.go:334] "Generic (PLEG): container finished" podID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" containerID="9c7e4237a51292f61a672df7abb7d7cfbf610914c90c30ab352313745a46ba00" exitCode=143 Feb 18 19:57:30 crc kubenswrapper[5007]: I0218 19:57:30.521941 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" event={"ID":"8e524dd1-5e4f-4baf-80fd-56b4e5724a63","Type":"ContainerDied","Data":"9c7e4237a51292f61a672df7abb7d7cfbf610914c90c30ab352313745a46ba00"} Feb 18 19:57:30 crc kubenswrapper[5007]: I0218 19:57:30.525823 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"315cb998-3184-42ba-9be1-68ddaff29f16","Type":"ContainerStarted","Data":"cbfd83f3f5d936c47951cdd31cf99f39a9e2701aab01d716c4dfcbbcebbd6564"} Feb 18 19:57:30 crc kubenswrapper[5007]: I0218 19:57:30.538886 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg" podStartSLOduration=4.538869303 podStartE2EDuration="4.538869303s" podCreationTimestamp="2026-02-18 19:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:30.538249495 +0000 UTC m=+1135.320369019" watchObservedRunningTime="2026-02-18 19:57:30.538869303 +0000 UTC m=+1135.320988817" Feb 18 19:57:30 crc kubenswrapper[5007]: I0218 19:57:30.575145 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:31 crc kubenswrapper[5007]: I0218 19:57:31.590501 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86e68c1d-c428-4b53-8e46-7f179faddc5c","Type":"ContainerStarted","Data":"e25f8dd04fdc96b8c79c3a3b3ffbda2a13cd122fbdf43fc59b942944901370d1"} Feb 18 19:57:31 crc kubenswrapper[5007]: I0218 19:57:31.590925 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86e68c1d-c428-4b53-8e46-7f179faddc5c","Type":"ContainerStarted","Data":"3968d8490dbc3df76c8802f21be3ac110433223f291b9d50bd58a5621041a079"} Feb 18 19:57:31 crc kubenswrapper[5007]: I0218 19:57:31.606967 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"315cb998-3184-42ba-9be1-68ddaff29f16","Type":"ContainerStarted","Data":"11db28651a2ae8a309f3ccfd761206cb50f97039020f2b20e5c474bc3b64df20"} Feb 18 19:57:31 crc kubenswrapper[5007]: I0218 19:57:31.607046 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="315cb998-3184-42ba-9be1-68ddaff29f16" 
containerName="cinder-api-log" containerID="cri-o://cbfd83f3f5d936c47951cdd31cf99f39a9e2701aab01d716c4dfcbbcebbd6564" gracePeriod=30 Feb 18 19:57:31 crc kubenswrapper[5007]: I0218 19:57:31.607132 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="315cb998-3184-42ba-9be1-68ddaff29f16" containerName="cinder-api" containerID="cri-o://11db28651a2ae8a309f3ccfd761206cb50f97039020f2b20e5c474bc3b64df20" gracePeriod=30 Feb 18 19:57:31 crc kubenswrapper[5007]: I0218 19:57:31.607319 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 19:57:31 crc kubenswrapper[5007]: I0218 19:57:31.619686 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82419489-dde1-4606-807c-95aeb1a19636","Type":"ContainerStarted","Data":"73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7"} Feb 18 19:57:31 crc kubenswrapper[5007]: I0218 19:57:31.651349 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.651331477 podStartE2EDuration="4.651331477s" podCreationTimestamp="2026-02-18 19:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:31.636040469 +0000 UTC m=+1136.418160003" watchObservedRunningTime="2026-02-18 19:57:31.651331477 +0000 UTC m=+1136.433450991" Feb 18 19:57:31 crc kubenswrapper[5007]: I0218 19:57:31.660761 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.259645788 podStartE2EDuration="5.660747117s" podCreationTimestamp="2026-02-18 19:57:26 +0000 UTC" firstStartedPulling="2026-02-18 19:57:28.329567602 +0000 UTC m=+1133.111687126" lastFinishedPulling="2026-02-18 19:57:28.730668931 +0000 UTC m=+1133.512788455" observedRunningTime="2026-02-18 19:57:31.658507422 
+0000 UTC m=+1136.440626966" watchObservedRunningTime="2026-02-18 19:57:31.660747117 +0000 UTC m=+1136.442866641" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.237963 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.641971 5007 generic.go:334] "Generic (PLEG): container finished" podID="315cb998-3184-42ba-9be1-68ddaff29f16" containerID="11db28651a2ae8a309f3ccfd761206cb50f97039020f2b20e5c474bc3b64df20" exitCode=0 Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.642220 5007 generic.go:334] "Generic (PLEG): container finished" podID="315cb998-3184-42ba-9be1-68ddaff29f16" containerID="cbfd83f3f5d936c47951cdd31cf99f39a9e2701aab01d716c4dfcbbcebbd6564" exitCode=143 Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.642265 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"315cb998-3184-42ba-9be1-68ddaff29f16","Type":"ContainerDied","Data":"11db28651a2ae8a309f3ccfd761206cb50f97039020f2b20e5c474bc3b64df20"} Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.642296 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"315cb998-3184-42ba-9be1-68ddaff29f16","Type":"ContainerDied","Data":"cbfd83f3f5d936c47951cdd31cf99f39a9e2701aab01d716c4dfcbbcebbd6564"} Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.643547 5007 generic.go:334] "Generic (PLEG): container finished" podID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" containerID="a68823f2a6faacffdbdb7a75a4632b3b886cf6c2ecd8136a7b40e4ac34d70398" exitCode=0 Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.643582 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" event={"ID":"8e524dd1-5e4f-4baf-80fd-56b4e5724a63","Type":"ContainerDied","Data":"a68823f2a6faacffdbdb7a75a4632b3b886cf6c2ecd8136a7b40e4ac34d70398"} Feb 18 19:57:32 crc 
kubenswrapper[5007]: I0218 19:57:32.643596 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" event={"ID":"8e524dd1-5e4f-4baf-80fd-56b4e5724a63","Type":"ContainerDied","Data":"eb8139f6319e504a54312e5aa1ab5de8ae9ccbc072ff4c4a64029a9c0914234f"} Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.643606 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb8139f6319e504a54312e5aa1ab5de8ae9ccbc072ff4c4a64029a9c0914234f" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.665849 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86e68c1d-c428-4b53-8e46-7f179faddc5c","Type":"ContainerStarted","Data":"d32ac9bc5c5b36cb1ccb6281c53243780f1146324542413a634ad1ffed7893a5"} Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.695348 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.794327 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820033 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-combined-ca-bundle\") pod \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820112 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/315cb998-3184-42ba-9be1-68ddaff29f16-logs\") pod \"315cb998-3184-42ba-9be1-68ddaff29f16\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820133 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-logs\") pod \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820182 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-combined-ca-bundle\") pod \"315cb998-3184-42ba-9be1-68ddaff29f16\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820200 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-config-data-custom\") pod \"315cb998-3184-42ba-9be1-68ddaff29f16\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820230 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/315cb998-3184-42ba-9be1-68ddaff29f16-etc-machine-id\") pod \"315cb998-3184-42ba-9be1-68ddaff29f16\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820313 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-config-data\") pod \"315cb998-3184-42ba-9be1-68ddaff29f16\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820357 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-config-data-custom\") pod \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820377 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n69mm\" (UniqueName: \"kubernetes.io/projected/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-kube-api-access-n69mm\") pod \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820421 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-scripts\") pod \"315cb998-3184-42ba-9be1-68ddaff29f16\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820481 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xrqh\" (UniqueName: \"kubernetes.io/projected/315cb998-3184-42ba-9be1-68ddaff29f16-kube-api-access-7xrqh\") pod \"315cb998-3184-42ba-9be1-68ddaff29f16\" (UID: \"315cb998-3184-42ba-9be1-68ddaff29f16\") " Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 
19:57:32.820547 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-config-data\") pod \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\" (UID: \"8e524dd1-5e4f-4baf-80fd-56b4e5724a63\") " Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820624 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/315cb998-3184-42ba-9be1-68ddaff29f16-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "315cb998-3184-42ba-9be1-68ddaff29f16" (UID: "315cb998-3184-42ba-9be1-68ddaff29f16"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.820944 5007 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/315cb998-3184-42ba-9be1-68ddaff29f16-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.824849 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/315cb998-3184-42ba-9be1-68ddaff29f16-logs" (OuterVolumeSpecName: "logs") pod "315cb998-3184-42ba-9be1-68ddaff29f16" (UID: "315cb998-3184-42ba-9be1-68ddaff29f16"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.827670 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/315cb998-3184-42ba-9be1-68ddaff29f16-kube-api-access-7xrqh" (OuterVolumeSpecName: "kube-api-access-7xrqh") pod "315cb998-3184-42ba-9be1-68ddaff29f16" (UID: "315cb998-3184-42ba-9be1-68ddaff29f16"). InnerVolumeSpecName "kube-api-access-7xrqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.828176 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-kube-api-access-n69mm" (OuterVolumeSpecName: "kube-api-access-n69mm") pod "8e524dd1-5e4f-4baf-80fd-56b4e5724a63" (UID: "8e524dd1-5e4f-4baf-80fd-56b4e5724a63"). InnerVolumeSpecName "kube-api-access-n69mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.829112 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-logs" (OuterVolumeSpecName: "logs") pod "8e524dd1-5e4f-4baf-80fd-56b4e5724a63" (UID: "8e524dd1-5e4f-4baf-80fd-56b4e5724a63"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.833547 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-scripts" (OuterVolumeSpecName: "scripts") pod "315cb998-3184-42ba-9be1-68ddaff29f16" (UID: "315cb998-3184-42ba-9be1-68ddaff29f16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.833721 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8e524dd1-5e4f-4baf-80fd-56b4e5724a63" (UID: "8e524dd1-5e4f-4baf-80fd-56b4e5724a63"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.836381 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "315cb998-3184-42ba-9be1-68ddaff29f16" (UID: "315cb998-3184-42ba-9be1-68ddaff29f16"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.855615 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "315cb998-3184-42ba-9be1-68ddaff29f16" (UID: "315cb998-3184-42ba-9be1-68ddaff29f16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.856852 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e524dd1-5e4f-4baf-80fd-56b4e5724a63" (UID: "8e524dd1-5e4f-4baf-80fd-56b4e5724a63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.879941 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-config-data" (OuterVolumeSpecName: "config-data") pod "8e524dd1-5e4f-4baf-80fd-56b4e5724a63" (UID: "8e524dd1-5e4f-4baf-80fd-56b4e5724a63"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.889196 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-config-data" (OuterVolumeSpecName: "config-data") pod "315cb998-3184-42ba-9be1-68ddaff29f16" (UID: "315cb998-3184-42ba-9be1-68ddaff29f16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.909966 5007 scope.go:117] "RemoveContainer" containerID="fc90edc84d1b4a0b247a94fbe61aaefeb37cfffbaad11795df3894ac691a8cef" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.922654 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.922676 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.922689 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/315cb998-3184-42ba-9be1-68ddaff29f16-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.922697 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.922705 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:32 crc 
kubenswrapper[5007]: I0218 19:57:32.922715 5007 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.922724 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.922733 5007 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.922742 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n69mm\" (UniqueName: \"kubernetes.io/projected/8e524dd1-5e4f-4baf-80fd-56b4e5724a63-kube-api-access-n69mm\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.922750 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/315cb998-3184-42ba-9be1-68ddaff29f16-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:32 crc kubenswrapper[5007]: I0218 19:57:32.922758 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xrqh\" (UniqueName: \"kubernetes.io/projected/315cb998-3184-42ba-9be1-68ddaff29f16-kube-api-access-7xrqh\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.677544 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"315cb998-3184-42ba-9be1-68ddaff29f16","Type":"ContainerDied","Data":"33d2e60d2dfd0e77c4822b48b098f48295ee832254044b023208c4eefd0113f1"} Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.677816 5007 scope.go:117] "RemoveContainer" 
containerID="11db28651a2ae8a309f3ccfd761206cb50f97039020f2b20e5c474bc3b64df20" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.677831 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.680789 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617","Type":"ContainerStarted","Data":"4c818401da43dcc1d8de232b852f7b2225dbd8e458d74bde2cedba9ed105f488"} Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.684349 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86e68c1d-c428-4b53-8e46-7f179faddc5c","Type":"ContainerStarted","Data":"dd30fe8b43f8664f8edac5c66fb7193a743f074cdd694f3c4d83e17a2ce64ba5"} Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.684567 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cf99f7bdb-5kdlj" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.725697 5007 scope.go:117] "RemoveContainer" containerID="cbfd83f3f5d936c47951cdd31cf99f39a9e2701aab01d716c4dfcbbcebbd6564" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.728543 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cf99f7bdb-5kdlj"] Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.735521 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5cf99f7bdb-5kdlj"] Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.742408 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.748719 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.765999 5007 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-api-0"] Feb 18 19:57:33 crc kubenswrapper[5007]: E0218 19:57:33.766336 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315cb998-3184-42ba-9be1-68ddaff29f16" containerName="cinder-api-log" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.766353 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="315cb998-3184-42ba-9be1-68ddaff29f16" containerName="cinder-api-log" Feb 18 19:57:33 crc kubenswrapper[5007]: E0218 19:57:33.766369 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315cb998-3184-42ba-9be1-68ddaff29f16" containerName="cinder-api" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.766375 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="315cb998-3184-42ba-9be1-68ddaff29f16" containerName="cinder-api" Feb 18 19:57:33 crc kubenswrapper[5007]: E0218 19:57:33.766400 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" containerName="barbican-api" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.766407 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" containerName="barbican-api" Feb 18 19:57:33 crc kubenswrapper[5007]: E0218 19:57:33.766438 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" containerName="barbican-api-log" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.766444 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" containerName="barbican-api-log" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.766623 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" containerName="barbican-api-log" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.766637 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" 
containerName="barbican-api" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.766651 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="315cb998-3184-42ba-9be1-68ddaff29f16" containerName="cinder-api-log" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.766664 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="315cb998-3184-42ba-9be1-68ddaff29f16" containerName="cinder-api" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.774797 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.778013 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.778080 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.778203 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.784671 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.838239 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77b6fd68-728c-4b5e-a930-47bb15d87549-logs\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.838370 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-config-data\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc 
kubenswrapper[5007]: I0218 19:57:33.838408 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b6fd68-728c-4b5e-a930-47bb15d87549-etc-machine-id\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.838459 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-scripts\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.838498 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.838565 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.838584 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-public-tls-certs\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.838605 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-xhtjk\" (UniqueName: \"kubernetes.io/projected/77b6fd68-728c-4b5e-a930-47bb15d87549-kube-api-access-xhtjk\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.838636 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-config-data-custom\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.930215 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="315cb998-3184-42ba-9be1-68ddaff29f16" path="/var/lib/kubelet/pods/315cb998-3184-42ba-9be1-68ddaff29f16/volumes" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.931449 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e524dd1-5e4f-4baf-80fd-56b4e5724a63" path="/var/lib/kubelet/pods/8e524dd1-5e4f-4baf-80fd-56b4e5724a63/volumes" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.940046 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-config-data-custom\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.940408 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77b6fd68-728c-4b5e-a930-47bb15d87549-logs\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.940514 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-config-data\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.940547 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b6fd68-728c-4b5e-a930-47bb15d87549-etc-machine-id\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.940567 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-scripts\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.940603 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.940644 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.940663 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-public-tls-certs\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: 
I0218 19:57:33.940680 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhtjk\" (UniqueName: \"kubernetes.io/projected/77b6fd68-728c-4b5e-a930-47bb15d87549-kube-api-access-xhtjk\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.940686 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b6fd68-728c-4b5e-a930-47bb15d87549-etc-machine-id\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.941904 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77b6fd68-728c-4b5e-a930-47bb15d87549-logs\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.946570 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.946921 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.947736 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.948416 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-config-data-custom\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.948869 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-scripts\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.951272 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b6fd68-728c-4b5e-a930-47bb15d87549-config-data\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:33 crc kubenswrapper[5007]: I0218 19:57:33.985017 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhtjk\" (UniqueName: \"kubernetes.io/projected/77b6fd68-728c-4b5e-a930-47bb15d87549-kube-api-access-xhtjk\") pod \"cinder-api-0\" (UID: \"77b6fd68-728c-4b5e-a930-47bb15d87549\") " pod="openstack/cinder-api-0" Feb 18 19:57:34 crc kubenswrapper[5007]: I0218 19:57:34.104277 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:57:34 crc kubenswrapper[5007]: I0218 19:57:34.575511 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:57:34 crc kubenswrapper[5007]: W0218 19:57:34.577138 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77b6fd68_728c_4b5e_a930_47bb15d87549.slice/crio-e26fe7fa06b73d8d6a93f4867e152df63a9d47ca1389be2e4a64a5a1cbfffed6 WatchSource:0}: Error finding container e26fe7fa06b73d8d6a93f4867e152df63a9d47ca1389be2e4a64a5a1cbfffed6: Status 404 returned error can't find the container with id e26fe7fa06b73d8d6a93f4867e152df63a9d47ca1389be2e4a64a5a1cbfffed6 Feb 18 19:57:34 crc kubenswrapper[5007]: I0218 19:57:34.695526 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"77b6fd68-728c-4b5e-a930-47bb15d87549","Type":"ContainerStarted","Data":"e26fe7fa06b73d8d6a93f4867e152df63a9d47ca1389be2e4a64a5a1cbfffed6"} Feb 18 19:57:35 crc kubenswrapper[5007]: I0218 19:57:35.711410 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"77b6fd68-728c-4b5e-a930-47bb15d87549","Type":"ContainerStarted","Data":"31b7eefe23a83ec8a65f6adb7fb54438803760029a2c3efbd3170a696ee45b39"} Feb 18 19:57:36 crc kubenswrapper[5007]: I0218 19:57:36.727961 5007 generic.go:334] "Generic (PLEG): container finished" podID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerID="4c818401da43dcc1d8de232b852f7b2225dbd8e458d74bde2cedba9ed105f488" exitCode=1 Feb 18 19:57:36 crc kubenswrapper[5007]: I0218 19:57:36.728041 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617","Type":"ContainerDied","Data":"4c818401da43dcc1d8de232b852f7b2225dbd8e458d74bde2cedba9ed105f488"} Feb 18 19:57:36 crc kubenswrapper[5007]: I0218 19:57:36.728349 5007 scope.go:117] 
"RemoveContainer" containerID="fc90edc84d1b4a0b247a94fbe61aaefeb37cfffbaad11795df3894ac691a8cef" Feb 18 19:57:36 crc kubenswrapper[5007]: I0218 19:57:36.729523 5007 scope.go:117] "RemoveContainer" containerID="4c818401da43dcc1d8de232b852f7b2225dbd8e458d74bde2cedba9ed105f488" Feb 18 19:57:36 crc kubenswrapper[5007]: E0218 19:57:36.730115 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(2d7e3b6c-20a8-4901-8cb2-b856b9e9b617)\"" pod="openstack/watcher-decision-engine-0" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" Feb 18 19:57:36 crc kubenswrapper[5007]: I0218 19:57:36.736373 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"77b6fd68-728c-4b5e-a930-47bb15d87549","Type":"ContainerStarted","Data":"a7d279cbe1b11d0ed9de5ff46333b6e01f9024d12c26a5805d853661667433fe"} Feb 18 19:57:36 crc kubenswrapper[5007]: I0218 19:57:36.738110 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 19:57:36 crc kubenswrapper[5007]: I0218 19:57:36.743638 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86e68c1d-c428-4b53-8e46-7f179faddc5c","Type":"ContainerStarted","Data":"4ea5499be3f27afd66a8c78e9d1237dc8adfabe346b2af8b80d4ffc2373c3c86"} Feb 18 19:57:36 crc kubenswrapper[5007]: I0218 19:57:36.744002 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:57:36 crc kubenswrapper[5007]: I0218 19:57:36.815190 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.684423196 podStartE2EDuration="7.815170184s" podCreationTimestamp="2026-02-18 19:57:29 +0000 UTC" firstStartedPulling="2026-02-18 19:57:30.612565554 +0000 UTC 
m=+1135.394685078" lastFinishedPulling="2026-02-18 19:57:35.743312542 +0000 UTC m=+1140.525432066" observedRunningTime="2026-02-18 19:57:36.792256747 +0000 UTC m=+1141.574376291" watchObservedRunningTime="2026-02-18 19:57:36.815170184 +0000 UTC m=+1141.597289708" Feb 18 19:57:36 crc kubenswrapper[5007]: I0218 19:57:36.838377 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.838352578 podStartE2EDuration="3.838352578s" podCreationTimestamp="2026-02-18 19:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:36.824236773 +0000 UTC m=+1141.606356297" watchObservedRunningTime="2026-02-18 19:57:36.838352578 +0000 UTC m=+1141.620472122" Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.376053 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.419621 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg" Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.460261 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.515192 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76b5487fdf-pt29p"] Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.515533 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" podUID="cb6286b0-40e6-4ca8-8750-25e256561c3a" containerName="dnsmasq-dns" containerID="cri-o://7c2ba9724d8802f4f3e9c0adcc8150b68f4af86f25a5460ea4407b0f4816d740" gracePeriod=10 Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.523554 5007 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/horizon-7957464b6-627s5" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.618254 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" podUID="cb6286b0-40e6-4ca8-8750-25e256561c3a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: connect: connection refused" Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.762937 5007 generic.go:334] "Generic (PLEG): container finished" podID="cb6286b0-40e6-4ca8-8750-25e256561c3a" containerID="7c2ba9724d8802f4f3e9c0adcc8150b68f4af86f25a5460ea4407b0f4816d740" exitCode=0 Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.763479 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" event={"ID":"cb6286b0-40e6-4ca8-8750-25e256561c3a","Type":"ContainerDied","Data":"7c2ba9724d8802f4f3e9c0adcc8150b68f4af86f25a5460ea4407b0f4816d740"} Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.764314 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="82419489-dde1-4606-807c-95aeb1a19636" containerName="cinder-scheduler" containerID="cri-o://5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f" gracePeriod=30 Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.764508 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="82419489-dde1-4606-807c-95aeb1a19636" containerName="probe" containerID="cri-o://73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7" gracePeriod=30 Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.927656 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.930284 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:57:37 crc kubenswrapper[5007]: I0218 19:57:37.932124 5007 scope.go:117] "RemoveContainer" containerID="4c818401da43dcc1d8de232b852f7b2225dbd8e458d74bde2cedba9ed105f488" Feb 18 19:57:37 crc kubenswrapper[5007]: E0218 19:57:37.932398 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(2d7e3b6c-20a8-4901-8cb2-b856b9e9b617)\"" pod="openstack/watcher-decision-engine-0" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.075123 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.143357 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-dns-svc\") pod \"cb6286b0-40e6-4ca8-8750-25e256561c3a\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.143442 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-ovsdbserver-sb\") pod \"cb6286b0-40e6-4ca8-8750-25e256561c3a\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.143541 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-ovsdbserver-nb\") pod \"cb6286b0-40e6-4ca8-8750-25e256561c3a\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.143567 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-dns-swift-storage-0\") pod \"cb6286b0-40e6-4ca8-8750-25e256561c3a\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.143675 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhbmt\" (UniqueName: \"kubernetes.io/projected/cb6286b0-40e6-4ca8-8750-25e256561c3a-kube-api-access-qhbmt\") pod \"cb6286b0-40e6-4ca8-8750-25e256561c3a\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.143815 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-config\") pod \"cb6286b0-40e6-4ca8-8750-25e256561c3a\" (UID: \"cb6286b0-40e6-4ca8-8750-25e256561c3a\") " Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.164801 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb6286b0-40e6-4ca8-8750-25e256561c3a-kube-api-access-qhbmt" (OuterVolumeSpecName: "kube-api-access-qhbmt") pod "cb6286b0-40e6-4ca8-8750-25e256561c3a" (UID: "cb6286b0-40e6-4ca8-8750-25e256561c3a"). InnerVolumeSpecName "kube-api-access-qhbmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.208397 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb6286b0-40e6-4ca8-8750-25e256561c3a" (UID: "cb6286b0-40e6-4ca8-8750-25e256561c3a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.214180 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb6286b0-40e6-4ca8-8750-25e256561c3a" (UID: "cb6286b0-40e6-4ca8-8750-25e256561c3a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.219120 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cb6286b0-40e6-4ca8-8750-25e256561c3a" (UID: "cb6286b0-40e6-4ca8-8750-25e256561c3a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.219364 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-config" (OuterVolumeSpecName: "config") pod "cb6286b0-40e6-4ca8-8750-25e256561c3a" (UID: "cb6286b0-40e6-4ca8-8750-25e256561c3a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.240584 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb6286b0-40e6-4ca8-8750-25e256561c3a" (UID: "cb6286b0-40e6-4ca8-8750-25e256561c3a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.246728 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhbmt\" (UniqueName: \"kubernetes.io/projected/cb6286b0-40e6-4ca8-8750-25e256561c3a-kube-api-access-qhbmt\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.246757 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.246766 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.246778 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.246786 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.246795 5007 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/cb6286b0-40e6-4ca8-8750-25e256561c3a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.783620 5007 generic.go:334] "Generic (PLEG): container finished" podID="82419489-dde1-4606-807c-95aeb1a19636" containerID="73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7" exitCode=0 Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.783722 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82419489-dde1-4606-807c-95aeb1a19636","Type":"ContainerDied","Data":"73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7"} Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.787061 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" event={"ID":"cb6286b0-40e6-4ca8-8750-25e256561c3a","Type":"ContainerDied","Data":"a9cfea4b13478ab4435d6d63fce99c70cff25042e36bd099f4cab6f355c766e0"} Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.787091 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5487fdf-pt29p" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.787124 5007 scope.go:117] "RemoveContainer" containerID="7c2ba9724d8802f4f3e9c0adcc8150b68f4af86f25a5460ea4407b0f4816d740" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.818840 5007 scope.go:117] "RemoveContainer" containerID="839bec76ce9ccdf5ea4833f3aaa4cd9d7dfa5c29a8a5a00235648b6b4e62b7c2" Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.842336 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76b5487fdf-pt29p"] Feb 18 19:57:38 crc kubenswrapper[5007]: I0218 19:57:38.853012 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76b5487fdf-pt29p"] Feb 18 19:57:39 crc kubenswrapper[5007]: I0218 19:57:39.922477 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb6286b0-40e6-4ca8-8750-25e256561c3a" path="/var/lib/kubelet/pods/cb6286b0-40e6-4ca8-8750-25e256561c3a/volumes" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.484523 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.486013 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.641631 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.738889 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-combined-ca-bundle\") pod \"82419489-dde1-4606-807c-95aeb1a19636\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.738939 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-config-data\") pod \"82419489-dde1-4606-807c-95aeb1a19636\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.738998 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-config-data-custom\") pod \"82419489-dde1-4606-807c-95aeb1a19636\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.739041 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksnbp\" (UniqueName: \"kubernetes.io/projected/82419489-dde1-4606-807c-95aeb1a19636-kube-api-access-ksnbp\") pod \"82419489-dde1-4606-807c-95aeb1a19636\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.739117 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-scripts\") pod \"82419489-dde1-4606-807c-95aeb1a19636\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.739135 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/82419489-dde1-4606-807c-95aeb1a19636-etc-machine-id\") pod \"82419489-dde1-4606-807c-95aeb1a19636\" (UID: \"82419489-dde1-4606-807c-95aeb1a19636\") " Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.739450 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82419489-dde1-4606-807c-95aeb1a19636-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "82419489-dde1-4606-807c-95aeb1a19636" (UID: "82419489-dde1-4606-807c-95aeb1a19636"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.747609 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "82419489-dde1-4606-807c-95aeb1a19636" (UID: "82419489-dde1-4606-807c-95aeb1a19636"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.747706 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-scripts" (OuterVolumeSpecName: "scripts") pod "82419489-dde1-4606-807c-95aeb1a19636" (UID: "82419489-dde1-4606-807c-95aeb1a19636"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.760591 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82419489-dde1-4606-807c-95aeb1a19636-kube-api-access-ksnbp" (OuterVolumeSpecName: "kube-api-access-ksnbp") pod "82419489-dde1-4606-807c-95aeb1a19636" (UID: "82419489-dde1-4606-807c-95aeb1a19636"). InnerVolumeSpecName "kube-api-access-ksnbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.797106 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82419489-dde1-4606-807c-95aeb1a19636" (UID: "82419489-dde1-4606-807c-95aeb1a19636"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.824527 5007 generic.go:334] "Generic (PLEG): container finished" podID="82419489-dde1-4606-807c-95aeb1a19636" containerID="5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f" exitCode=0 Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.824568 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82419489-dde1-4606-807c-95aeb1a19636","Type":"ContainerDied","Data":"5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f"} Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.824592 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"82419489-dde1-4606-807c-95aeb1a19636","Type":"ContainerDied","Data":"0ca6bc6be37497fb7e471a42ae49de64fd317b1b1963b04c8eb332759dc2835b"} Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.824607 5007 scope.go:117] "RemoveContainer" containerID="73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.824720 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.840957 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.840982 5007 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82419489-dde1-4606-807c-95aeb1a19636-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.840994 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.841004 5007 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.841013 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksnbp\" (UniqueName: \"kubernetes.io/projected/82419489-dde1-4606-807c-95aeb1a19636-kube-api-access-ksnbp\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.853784 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-config-data" (OuterVolumeSpecName: "config-data") pod "82419489-dde1-4606-807c-95aeb1a19636" (UID: "82419489-dde1-4606-807c-95aeb1a19636"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.862604 5007 scope.go:117] "RemoveContainer" containerID="5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.890548 5007 scope.go:117] "RemoveContainer" containerID="73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7" Feb 18 19:57:42 crc kubenswrapper[5007]: E0218 19:57:42.891106 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7\": container with ID starting with 73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7 not found: ID does not exist" containerID="73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.891157 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7"} err="failed to get container status \"73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7\": rpc error: code = NotFound desc = could not find container \"73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7\": container with ID starting with 73936a99b622ef4d53592fa4c29f2bfcd1da9e1653e5ba6b4126a014b9ec5db7 not found: ID does not exist" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.891186 5007 scope.go:117] "RemoveContainer" containerID="5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f" Feb 18 19:57:42 crc kubenswrapper[5007]: E0218 19:57:42.891596 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f\": container with ID starting with 
5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f not found: ID does not exist" containerID="5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.891641 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f"} err="failed to get container status \"5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f\": rpc error: code = NotFound desc = could not find container \"5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f\": container with ID starting with 5024f66ee62474a40c1074e0068652ddb35b3bc5324ad1b60c75b216b5d0518f not found: ID does not exist" Feb 18 19:57:42 crc kubenswrapper[5007]: I0218 19:57:42.943313 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82419489-dde1-4606-807c-95aeb1a19636-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.164794 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.179117 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.205612 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:57:43 crc kubenswrapper[5007]: E0218 19:57:43.206033 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82419489-dde1-4606-807c-95aeb1a19636" containerName="cinder-scheduler" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.206053 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="82419489-dde1-4606-807c-95aeb1a19636" containerName="cinder-scheduler" Feb 18 19:57:43 crc kubenswrapper[5007]: E0218 19:57:43.206081 5007 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="82419489-dde1-4606-807c-95aeb1a19636" containerName="probe" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.206088 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="82419489-dde1-4606-807c-95aeb1a19636" containerName="probe" Feb 18 19:57:43 crc kubenswrapper[5007]: E0218 19:57:43.206109 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6286b0-40e6-4ca8-8750-25e256561c3a" containerName="init" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.206114 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6286b0-40e6-4ca8-8750-25e256561c3a" containerName="init" Feb 18 19:57:43 crc kubenswrapper[5007]: E0218 19:57:43.206121 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6286b0-40e6-4ca8-8750-25e256561c3a" containerName="dnsmasq-dns" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.206127 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6286b0-40e6-4ca8-8750-25e256561c3a" containerName="dnsmasq-dns" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.206316 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="82419489-dde1-4606-807c-95aeb1a19636" containerName="probe" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.206331 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="82419489-dde1-4606-807c-95aeb1a19636" containerName="cinder-scheduler" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.206350 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb6286b0-40e6-4ca8-8750-25e256561c3a" containerName="dnsmasq-dns" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.207347 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.210079 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.217295 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.248770 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af9aea6-87a2-4acb-9b1e-b05859fde26e-config-data\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.248850 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af9aea6-87a2-4acb-9b1e-b05859fde26e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.248921 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af9aea6-87a2-4acb-9b1e-b05859fde26e-scripts\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.249025 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxqfw\" (UniqueName: \"kubernetes.io/projected/9af9aea6-87a2-4acb-9b1e-b05859fde26e-kube-api-access-zxqfw\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.249087 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9af9aea6-87a2-4acb-9b1e-b05859fde26e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.249186 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af9aea6-87a2-4acb-9b1e-b05859fde26e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.351034 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af9aea6-87a2-4acb-9b1e-b05859fde26e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.351382 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af9aea6-87a2-4acb-9b1e-b05859fde26e-config-data\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.351517 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9af9aea6-87a2-4acb-9b1e-b05859fde26e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.351616 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9af9aea6-87a2-4acb-9b1e-b05859fde26e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.351636 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af9aea6-87a2-4acb-9b1e-b05859fde26e-scripts\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.351781 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxqfw\" (UniqueName: \"kubernetes.io/projected/9af9aea6-87a2-4acb-9b1e-b05859fde26e-kube-api-access-zxqfw\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.351859 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9af9aea6-87a2-4acb-9b1e-b05859fde26e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.355344 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af9aea6-87a2-4acb-9b1e-b05859fde26e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.355388 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9af9aea6-87a2-4acb-9b1e-b05859fde26e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " 
pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.355755 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af9aea6-87a2-4acb-9b1e-b05859fde26e-config-data\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.355996 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af9aea6-87a2-4acb-9b1e-b05859fde26e-scripts\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.374939 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxqfw\" (UniqueName: \"kubernetes.io/projected/9af9aea6-87a2-4acb-9b1e-b05859fde26e-kube-api-access-zxqfw\") pod \"cinder-scheduler-0\" (UID: \"9af9aea6-87a2-4acb-9b1e-b05859fde26e\") " pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.556692 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:57:43 crc kubenswrapper[5007]: I0218 19:57:43.929922 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82419489-dde1-4606-807c-95aeb1a19636" path="/var/lib/kubelet/pods/82419489-dde1-4606-807c-95aeb1a19636/volumes" Feb 18 19:57:44 crc kubenswrapper[5007]: I0218 19:57:44.058094 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:57:44 crc kubenswrapper[5007]: W0218 19:57:44.063041 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af9aea6_87a2_4acb_9b1e_b05859fde26e.slice/crio-e1991e40483ccf2e67c41918efafea592c7637964d64cccbf85cf770131ec5f3 WatchSource:0}: Error finding container e1991e40483ccf2e67c41918efafea592c7637964d64cccbf85cf770131ec5f3: Status 404 returned error can't find the container with id e1991e40483ccf2e67c41918efafea592c7637964d64cccbf85cf770131ec5f3 Feb 18 19:57:44 crc kubenswrapper[5007]: I0218 19:57:44.879308 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9af9aea6-87a2-4acb-9b1e-b05859fde26e","Type":"ContainerStarted","Data":"b9750e3e8add3d96c7de208b64713959b17e546b5f655976831a09998b317e68"} Feb 18 19:57:44 crc kubenswrapper[5007]: I0218 19:57:44.879664 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9af9aea6-87a2-4acb-9b1e-b05859fde26e","Type":"ContainerStarted","Data":"e1991e40483ccf2e67c41918efafea592c7637964d64cccbf85cf770131ec5f3"} Feb 18 19:57:44 crc kubenswrapper[5007]: I0218 19:57:44.964156 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66bd9779d-c6rl5" Feb 18 19:57:45 crc kubenswrapper[5007]: I0218 19:57:45.102710 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-57dbf48d4-s2z7s" Feb 18 19:57:45 crc 
kubenswrapper[5007]: I0218 19:57:45.120134 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-66bd9779d-c6rl5" Feb 18 19:57:45 crc kubenswrapper[5007]: I0218 19:57:45.212225 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6988d9b8b6-5qsnc"] Feb 18 19:57:45 crc kubenswrapper[5007]: I0218 19:57:45.217562 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6988d9b8b6-5qsnc" podUID="c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" containerName="placement-log" containerID="cri-o://4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605" gracePeriod=30 Feb 18 19:57:45 crc kubenswrapper[5007]: I0218 19:57:45.217800 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6988d9b8b6-5qsnc" podUID="c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" containerName="placement-api" containerID="cri-o://3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8" gracePeriod=30 Feb 18 19:57:45 crc kubenswrapper[5007]: I0218 19:57:45.665030 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:57:45 crc kubenswrapper[5007]: I0218 19:57:45.665303 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:57:45 crc kubenswrapper[5007]: I0218 19:57:45.890754 5007 generic.go:334] "Generic (PLEG): container finished" podID="c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" 
containerID="4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605" exitCode=143 Feb 18 19:57:45 crc kubenswrapper[5007]: I0218 19:57:45.890844 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6988d9b8b6-5qsnc" event={"ID":"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b","Type":"ContainerDied","Data":"4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605"} Feb 18 19:57:45 crc kubenswrapper[5007]: I0218 19:57:45.893019 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9af9aea6-87a2-4acb-9b1e-b05859fde26e","Type":"ContainerStarted","Data":"f83ee2d8190130b7d7998d6697752c1162ec3cedc3912d263ca1fa22a1287f41"} Feb 18 19:57:45 crc kubenswrapper[5007]: I0218 19:57:45.926098 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.923416388 podStartE2EDuration="2.923416388s" podCreationTimestamp="2026-02-18 19:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:45.914531254 +0000 UTC m=+1150.696650788" watchObservedRunningTime="2026-02-18 19:57:45.923416388 +0000 UTC m=+1150.705535912" Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.662304 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.779152 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.830496 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-internal-tls-certs\") pod \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.830540 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-config-data\") pod \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.830569 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-public-tls-certs\") pod \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.830652 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-logs\") pod \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.830718 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-combined-ca-bundle\") pod \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " Feb 18 
19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.830805 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-scripts\") pod \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.830835 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh6k4\" (UniqueName: \"kubernetes.io/projected/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-kube-api-access-jh6k4\") pod \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\" (UID: \"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b\") " Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.838480 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-logs" (OuterVolumeSpecName: "logs") pod "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" (UID: "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.841668 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-kube-api-access-jh6k4" (OuterVolumeSpecName: "kube-api-access-jh6k4") pod "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" (UID: "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b"). InnerVolumeSpecName "kube-api-access-jh6k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.870980 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-scripts" (OuterVolumeSpecName: "scripts") pod "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" (UID: "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.923759 5007 generic.go:334] "Generic (PLEG): container finished" podID="c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" containerID="3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8" exitCode=0 Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.924675 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6988d9b8b6-5qsnc" Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.925096 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6988d9b8b6-5qsnc" event={"ID":"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b","Type":"ContainerDied","Data":"3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8"} Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.925118 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6988d9b8b6-5qsnc" event={"ID":"c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b","Type":"ContainerDied","Data":"08b1cd264a07fbc1a494fd040d0df841645b2a8d429ad2f38e9a62d492d09b2b"} Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.925135 5007 scope.go:117] "RemoveContainer" containerID="3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8" Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.932835 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.932866 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh6k4\" (UniqueName: \"kubernetes.io/projected/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-kube-api-access-jh6k4\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.932877 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:46 crc kubenswrapper[5007]: I0218 19:57:46.958776 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-config-data" (OuterVolumeSpecName: "config-data") pod "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" (UID: "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.025758 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" (UID: "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.037758 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.037784 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.065644 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" (UID: "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.072672 5007 scope.go:117] "RemoveContainer" containerID="4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.107421 5007 scope.go:117] "RemoveContainer" containerID="3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.108077 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" (UID: "c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:47 crc kubenswrapper[5007]: E0218 19:57:47.115595 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8\": container with ID starting with 3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8 not found: ID does not exist" containerID="3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.115639 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8"} err="failed to get container status \"3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8\": rpc error: code = NotFound desc = could not find container \"3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8\": container with ID starting with 3c4dde69fc9869f7d479452cd451d3fe82ce21e9f615e9971f9ed6e44240e8f8 not found: ID does not exist" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.115667 5007 scope.go:117] 
"RemoveContainer" containerID="4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605" Feb 18 19:57:47 crc kubenswrapper[5007]: E0218 19:57:47.121634 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605\": container with ID starting with 4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605 not found: ID does not exist" containerID="4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.121695 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605"} err="failed to get container status \"4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605\": rpc error: code = NotFound desc = could not find container \"4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605\": container with ID starting with 4e74dcfccdf8d90f8a4531d4bed82b4d278e5ba53e2abc0d767512a3c4927605 not found: ID does not exist" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.139854 5007 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.139895 5007 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.256314 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6988d9b8b6-5qsnc"] Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.270780 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-6988d9b8b6-5qsnc"] Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.524857 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7957464b6-627s5" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.919687 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" path="/var/lib/kubelet/pods/c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b/volumes" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.928980 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.929033 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:57:47 crc kubenswrapper[5007]: I0218 19:57:47.929853 5007 scope.go:117] "RemoveContainer" containerID="4c818401da43dcc1d8de232b852f7b2225dbd8e458d74bde2cedba9ed105f488" Feb 18 19:57:47 crc kubenswrapper[5007]: E0218 19:57:47.930270 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(2d7e3b6c-20a8-4901-8cb2-b856b9e9b617)\"" pod="openstack/watcher-decision-engine-0" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" Feb 18 19:57:48 crc kubenswrapper[5007]: I0218 19:57:48.557783 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.477102 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 
19:57:49 crc kubenswrapper[5007]: E0218 19:57:49.477619 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" containerName="placement-api" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.477636 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" containerName="placement-api" Feb 18 19:57:49 crc kubenswrapper[5007]: E0218 19:57:49.477652 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" containerName="placement-log" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.477661 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" containerName="placement-log" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.477901 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" containerName="placement-api" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.477929 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04b7ebb-666e-4d1b-8dbf-1d5fe6a4f49b" containerName="placement-log" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.480683 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.483755 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.483813 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.484862 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-twgq6" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.518019 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.582188 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7f95a535-dc89-494b-9e82-7e7c490dc529-openstack-config\") pod \"openstackclient\" (UID: \"7f95a535-dc89-494b-9e82-7e7c490dc529\") " pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.582247 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f95a535-dc89-494b-9e82-7e7c490dc529-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7f95a535-dc89-494b-9e82-7e7c490dc529\") " pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.582315 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqff\" (UniqueName: \"kubernetes.io/projected/7f95a535-dc89-494b-9e82-7e7c490dc529-kube-api-access-nqqff\") pod \"openstackclient\" (UID: \"7f95a535-dc89-494b-9e82-7e7c490dc529\") " pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.582373 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f95a535-dc89-494b-9e82-7e7c490dc529-openstack-config-secret\") pod \"openstackclient\" (UID: \"7f95a535-dc89-494b-9e82-7e7c490dc529\") " pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.684064 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7f95a535-dc89-494b-9e82-7e7c490dc529-openstack-config\") pod \"openstackclient\" (UID: \"7f95a535-dc89-494b-9e82-7e7c490dc529\") " pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.684126 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f95a535-dc89-494b-9e82-7e7c490dc529-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7f95a535-dc89-494b-9e82-7e7c490dc529\") " pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.684161 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqff\" (UniqueName: \"kubernetes.io/projected/7f95a535-dc89-494b-9e82-7e7c490dc529-kube-api-access-nqqff\") pod \"openstackclient\" (UID: \"7f95a535-dc89-494b-9e82-7e7c490dc529\") " pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.684215 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f95a535-dc89-494b-9e82-7e7c490dc529-openstack-config-secret\") pod \"openstackclient\" (UID: \"7f95a535-dc89-494b-9e82-7e7c490dc529\") " pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.684930 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/7f95a535-dc89-494b-9e82-7e7c490dc529-openstack-config\") pod \"openstackclient\" (UID: \"7f95a535-dc89-494b-9e82-7e7c490dc529\") " pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.689235 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f95a535-dc89-494b-9e82-7e7c490dc529-openstack-config-secret\") pod \"openstackclient\" (UID: \"7f95a535-dc89-494b-9e82-7e7c490dc529\") " pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.691812 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f95a535-dc89-494b-9e82-7e7c490dc529-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7f95a535-dc89-494b-9e82-7e7c490dc529\") " pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.703409 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqff\" (UniqueName: \"kubernetes.io/projected/7f95a535-dc89-494b-9e82-7e7c490dc529-kube-api-access-nqqff\") pod \"openstackclient\" (UID: \"7f95a535-dc89-494b-9e82-7e7c490dc529\") " pod="openstack/openstackclient" Feb 18 19:57:49 crc kubenswrapper[5007]: I0218 19:57:49.809129 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 19:57:50 crc kubenswrapper[5007]: I0218 19:57:50.166094 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5b9cfd5589-mgdww" Feb 18 19:57:50 crc kubenswrapper[5007]: I0218 19:57:50.335187 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6977b8795d-9nwjb"] Feb 18 19:57:50 crc kubenswrapper[5007]: I0218 19:57:50.335674 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6977b8795d-9nwjb" podUID="a5926c07-fe6b-49a1-bb8a-6b630aa69d76" containerName="neutron-api" containerID="cri-o://21cbe18a0113f465ef646b2258c92a08fb33cf33cd6d39fc44868596e5abaff6" gracePeriod=30 Feb 18 19:57:50 crc kubenswrapper[5007]: I0218 19:57:50.336270 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6977b8795d-9nwjb" podUID="a5926c07-fe6b-49a1-bb8a-6b630aa69d76" containerName="neutron-httpd" containerID="cri-o://78f925ced014a3d97f31a5bc4811bc093815cb2e14e282e1a34d374464a16138" gracePeriod=30 Feb 18 19:57:50 crc kubenswrapper[5007]: I0218 19:57:50.400164 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 19:57:51 crc kubenswrapper[5007]: I0218 19:57:51.002916 5007 generic.go:334] "Generic (PLEG): container finished" podID="a5926c07-fe6b-49a1-bb8a-6b630aa69d76" containerID="78f925ced014a3d97f31a5bc4811bc093815cb2e14e282e1a34d374464a16138" exitCode=0 Feb 18 19:57:51 crc kubenswrapper[5007]: I0218 19:57:51.002993 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6977b8795d-9nwjb" event={"ID":"a5926c07-fe6b-49a1-bb8a-6b630aa69d76","Type":"ContainerDied","Data":"78f925ced014a3d97f31a5bc4811bc093815cb2e14e282e1a34d374464a16138"} Feb 18 19:57:51 crc kubenswrapper[5007]: I0218 19:57:51.003919 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"7f95a535-dc89-494b-9e82-7e7c490dc529","Type":"ContainerStarted","Data":"a614393dfcc20e6d9f03580d9bf87c5342e1a66c040d519b02ebe144aca36f23"} Feb 18 19:57:51 crc kubenswrapper[5007]: I0218 19:57:51.950459 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7957464b6-627s5" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.018931 5007 generic.go:334] "Generic (PLEG): container finished" podID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerID="a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3" exitCode=137 Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.018983 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7957464b6-627s5" event={"ID":"b695b866-e754-46f6-8cd8-ef152bfc1a54","Type":"ContainerDied","Data":"a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3"} Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.019011 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7957464b6-627s5" event={"ID":"b695b866-e754-46f6-8cd8-ef152bfc1a54","Type":"ContainerDied","Data":"56d9874e011ddb5e2d4aff8156a73bc12319723e11e4596ba52d3e14739b7cd4"} Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.019031 5007 scope.go:117] "RemoveContainer" containerID="60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.020160 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7957464b6-627s5" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.041752 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b695b866-e754-46f6-8cd8-ef152bfc1a54-scripts\") pod \"b695b866-e754-46f6-8cd8-ef152bfc1a54\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.043609 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-horizon-secret-key\") pod \"b695b866-e754-46f6-8cd8-ef152bfc1a54\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.043652 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-horizon-tls-certs\") pod \"b695b866-e754-46f6-8cd8-ef152bfc1a54\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.043728 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b695b866-e754-46f6-8cd8-ef152bfc1a54-logs\") pod \"b695b866-e754-46f6-8cd8-ef152bfc1a54\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.043800 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-combined-ca-bundle\") pod \"b695b866-e754-46f6-8cd8-ef152bfc1a54\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.043869 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmqtt\" (UniqueName: 
\"kubernetes.io/projected/b695b866-e754-46f6-8cd8-ef152bfc1a54-kube-api-access-lmqtt\") pod \"b695b866-e754-46f6-8cd8-ef152bfc1a54\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.043892 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b695b866-e754-46f6-8cd8-ef152bfc1a54-config-data\") pod \"b695b866-e754-46f6-8cd8-ef152bfc1a54\" (UID: \"b695b866-e754-46f6-8cd8-ef152bfc1a54\") " Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.045988 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b695b866-e754-46f6-8cd8-ef152bfc1a54-logs" (OuterVolumeSpecName: "logs") pod "b695b866-e754-46f6-8cd8-ef152bfc1a54" (UID: "b695b866-e754-46f6-8cd8-ef152bfc1a54"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.049610 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b695b866-e754-46f6-8cd8-ef152bfc1a54-kube-api-access-lmqtt" (OuterVolumeSpecName: "kube-api-access-lmqtt") pod "b695b866-e754-46f6-8cd8-ef152bfc1a54" (UID: "b695b866-e754-46f6-8cd8-ef152bfc1a54"). InnerVolumeSpecName "kube-api-access-lmqtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.081896 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b695b866-e754-46f6-8cd8-ef152bfc1a54" (UID: "b695b866-e754-46f6-8cd8-ef152bfc1a54"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.091161 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b695b866-e754-46f6-8cd8-ef152bfc1a54-config-data" (OuterVolumeSpecName: "config-data") pod "b695b866-e754-46f6-8cd8-ef152bfc1a54" (UID: "b695b866-e754-46f6-8cd8-ef152bfc1a54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.099803 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b695b866-e754-46f6-8cd8-ef152bfc1a54" (UID: "b695b866-e754-46f6-8cd8-ef152bfc1a54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.111689 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "b695b866-e754-46f6-8cd8-ef152bfc1a54" (UID: "b695b866-e754-46f6-8cd8-ef152bfc1a54"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.123945 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b695b866-e754-46f6-8cd8-ef152bfc1a54-scripts" (OuterVolumeSpecName: "scripts") pod "b695b866-e754-46f6-8cd8-ef152bfc1a54" (UID: "b695b866-e754-46f6-8cd8-ef152bfc1a54"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.146176 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.146210 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmqtt\" (UniqueName: \"kubernetes.io/projected/b695b866-e754-46f6-8cd8-ef152bfc1a54-kube-api-access-lmqtt\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.146222 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b695b866-e754-46f6-8cd8-ef152bfc1a54-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.146233 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b695b866-e754-46f6-8cd8-ef152bfc1a54-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.146243 5007 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.146251 5007 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b695b866-e754-46f6-8cd8-ef152bfc1a54-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.146259 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b695b866-e754-46f6-8cd8-ef152bfc1a54-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.203016 5007 
scope.go:117] "RemoveContainer" containerID="a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.226347 5007 scope.go:117] "RemoveContainer" containerID="60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7" Feb 18 19:57:52 crc kubenswrapper[5007]: E0218 19:57:52.226773 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7\": container with ID starting with 60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7 not found: ID does not exist" containerID="60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.226798 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7"} err="failed to get container status \"60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7\": rpc error: code = NotFound desc = could not find container \"60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7\": container with ID starting with 60420fc278ba8adbcc79b5a3e4fa916813bebaae8c89558b24f83f89cb0e70a7 not found: ID does not exist" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.226817 5007 scope.go:117] "RemoveContainer" containerID="a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3" Feb 18 19:57:52 crc kubenswrapper[5007]: E0218 19:57:52.227170 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3\": container with ID starting with a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3 not found: ID does not exist" containerID="a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3" Feb 18 
19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.227184 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3"} err="failed to get container status \"a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3\": rpc error: code = NotFound desc = could not find container \"a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3\": container with ID starting with a07da9516740c562f01ed454a7c579ae1ed349f581ab33fd001aac5c2a7f0eb3 not found: ID does not exist" Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.366076 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7957464b6-627s5"] Feb 18 19:57:52 crc kubenswrapper[5007]: I0218 19:57:52.373381 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7957464b6-627s5"] Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.412998 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-f7b76fc49-wc8vb"] Feb 18 19:57:53 crc kubenswrapper[5007]: E0218 19:57:53.414178 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.414208 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon" Feb 18 19:57:53 crc kubenswrapper[5007]: E0218 19:57:53.414235 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon-log" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.414244 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon-log" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.414599 5007 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.414645 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" containerName="horizon-log" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.416811 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.418091 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.419824 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.440263 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.443115 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f7b76fc49-wc8vb"] Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.569647 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/281212cf-1ff4-48bb-bfc7-73f4381ed878-config-data\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.569706 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/281212cf-1ff4-48bb-bfc7-73f4381ed878-public-tls-certs\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.569749 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/281212cf-1ff4-48bb-bfc7-73f4381ed878-log-httpd\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.569831 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/281212cf-1ff4-48bb-bfc7-73f4381ed878-etc-swift\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.569874 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/281212cf-1ff4-48bb-bfc7-73f4381ed878-run-httpd\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.569889 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/281212cf-1ff4-48bb-bfc7-73f4381ed878-internal-tls-certs\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.569990 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281212cf-1ff4-48bb-bfc7-73f4381ed878-combined-ca-bundle\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 
19:57:53.570062 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqrkk\" (UniqueName: \"kubernetes.io/projected/281212cf-1ff4-48bb-bfc7-73f4381ed878-kube-api-access-lqrkk\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.671459 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/281212cf-1ff4-48bb-bfc7-73f4381ed878-config-data\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.671511 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/281212cf-1ff4-48bb-bfc7-73f4381ed878-public-tls-certs\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.671541 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/281212cf-1ff4-48bb-bfc7-73f4381ed878-log-httpd\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.671576 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/281212cf-1ff4-48bb-bfc7-73f4381ed878-etc-swift\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.671590 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/281212cf-1ff4-48bb-bfc7-73f4381ed878-run-httpd\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.671606 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/281212cf-1ff4-48bb-bfc7-73f4381ed878-internal-tls-certs\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.671683 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281212cf-1ff4-48bb-bfc7-73f4381ed878-combined-ca-bundle\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.671718 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqrkk\" (UniqueName: \"kubernetes.io/projected/281212cf-1ff4-48bb-bfc7-73f4381ed878-kube-api-access-lqrkk\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.672156 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/281212cf-1ff4-48bb-bfc7-73f4381ed878-log-httpd\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.672241 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/281212cf-1ff4-48bb-bfc7-73f4381ed878-run-httpd\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.675974 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/281212cf-1ff4-48bb-bfc7-73f4381ed878-public-tls-certs\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.676036 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/281212cf-1ff4-48bb-bfc7-73f4381ed878-internal-tls-certs\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.676985 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/281212cf-1ff4-48bb-bfc7-73f4381ed878-etc-swift\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.677595 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281212cf-1ff4-48bb-bfc7-73f4381ed878-combined-ca-bundle\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.678388 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/281212cf-1ff4-48bb-bfc7-73f4381ed878-config-data\") pod \"swift-proxy-f7b76fc49-wc8vb\" 
(UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.690804 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqrkk\" (UniqueName: \"kubernetes.io/projected/281212cf-1ff4-48bb-bfc7-73f4381ed878-kube-api-access-lqrkk\") pod \"swift-proxy-f7b76fc49-wc8vb\" (UID: \"281212cf-1ff4-48bb-bfc7-73f4381ed878\") " pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.754544 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.788485 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 19:57:53 crc kubenswrapper[5007]: I0218 19:57:53.919832 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b695b866-e754-46f6-8cd8-ef152bfc1a54" path="/var/lib/kubelet/pods/b695b866-e754-46f6-8cd8-ef152bfc1a54/volumes" Feb 18 19:57:54 crc kubenswrapper[5007]: I0218 19:57:54.426530 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:54 crc kubenswrapper[5007]: I0218 19:57:54.426996 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="ceilometer-central-agent" containerID="cri-o://e25f8dd04fdc96b8c79c3a3b3ffbda2a13cd122fbdf43fc59b942944901370d1" gracePeriod=30 Feb 18 19:57:54 crc kubenswrapper[5007]: I0218 19:57:54.427107 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="ceilometer-notification-agent" containerID="cri-o://d32ac9bc5c5b36cb1ccb6281c53243780f1146324542413a634ad1ffed7893a5" gracePeriod=30 Feb 18 19:57:54 crc kubenswrapper[5007]: 
I0218 19:57:54.427098 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="sg-core" containerID="cri-o://dd30fe8b43f8664f8edac5c66fb7193a743f074cdd694f3c4d83e17a2ce64ba5" gracePeriod=30 Feb 18 19:57:54 crc kubenswrapper[5007]: I0218 19:57:54.427125 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="proxy-httpd" containerID="cri-o://4ea5499be3f27afd66a8c78e9d1237dc8adfabe346b2af8b80d4ffc2373c3c86" gracePeriod=30 Feb 18 19:57:54 crc kubenswrapper[5007]: I0218 19:57:54.464746 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.189:3000/\": EOF" Feb 18 19:57:54 crc kubenswrapper[5007]: I0218 19:57:54.489978 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f7b76fc49-wc8vb"] Feb 18 19:57:54 crc kubenswrapper[5007]: E0218 19:57:54.707307 5007 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86e68c1d_c428_4b53_8e46_7f179faddc5c.slice/crio-dd30fe8b43f8664f8edac5c66fb7193a743f074cdd694f3c4d83e17a2ce64ba5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86e68c1d_c428_4b53_8e46_7f179faddc5c.slice/crio-conmon-dd30fe8b43f8664f8edac5c66fb7193a743f074cdd694f3c4d83e17a2ce64ba5.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.063307 5007 generic.go:334] "Generic (PLEG): container finished" podID="86e68c1d-c428-4b53-8e46-7f179faddc5c" 
containerID="4ea5499be3f27afd66a8c78e9d1237dc8adfabe346b2af8b80d4ffc2373c3c86" exitCode=0 Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.063588 5007 generic.go:334] "Generic (PLEG): container finished" podID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerID="dd30fe8b43f8664f8edac5c66fb7193a743f074cdd694f3c4d83e17a2ce64ba5" exitCode=2 Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.063598 5007 generic.go:334] "Generic (PLEG): container finished" podID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerID="e25f8dd04fdc96b8c79c3a3b3ffbda2a13cd122fbdf43fc59b942944901370d1" exitCode=0 Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.063643 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86e68c1d-c428-4b53-8e46-7f179faddc5c","Type":"ContainerDied","Data":"4ea5499be3f27afd66a8c78e9d1237dc8adfabe346b2af8b80d4ffc2373c3c86"} Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.063671 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86e68c1d-c428-4b53-8e46-7f179faddc5c","Type":"ContainerDied","Data":"dd30fe8b43f8664f8edac5c66fb7193a743f074cdd694f3c4d83e17a2ce64ba5"} Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.063680 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86e68c1d-c428-4b53-8e46-7f179faddc5c","Type":"ContainerDied","Data":"e25f8dd04fdc96b8c79c3a3b3ffbda2a13cd122fbdf43fc59b942944901370d1"} Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.066332 5007 generic.go:334] "Generic (PLEG): container finished" podID="a5926c07-fe6b-49a1-bb8a-6b630aa69d76" containerID="21cbe18a0113f465ef646b2258c92a08fb33cf33cd6d39fc44868596e5abaff6" exitCode=0 Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.066416 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6977b8795d-9nwjb" 
event={"ID":"a5926c07-fe6b-49a1-bb8a-6b630aa69d76","Type":"ContainerDied","Data":"21cbe18a0113f465ef646b2258c92a08fb33cf33cd6d39fc44868596e5abaff6"} Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.068993 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f7b76fc49-wc8vb" event={"ID":"281212cf-1ff4-48bb-bfc7-73f4381ed878","Type":"ContainerStarted","Data":"3e7c351ed507483fff66659e93944de595fe5996219207d37b40b1717e3d7efe"} Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.069031 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f7b76fc49-wc8vb" event={"ID":"281212cf-1ff4-48bb-bfc7-73f4381ed878","Type":"ContainerStarted","Data":"70b6260a3104eedf7799e4f621722a960c92e9e9ed1dddaa64a9ccbccf87c362"} Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.236895 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.312911 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-ovndb-tls-certs\") pod \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.313029 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-config\") pod \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.313694 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-httpd-config\") pod \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " 
Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.313792 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbl96\" (UniqueName: \"kubernetes.io/projected/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-kube-api-access-kbl96\") pod \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.313938 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-combined-ca-bundle\") pod \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\" (UID: \"a5926c07-fe6b-49a1-bb8a-6b630aa69d76\") " Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.317992 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a5926c07-fe6b-49a1-bb8a-6b630aa69d76" (UID: "a5926c07-fe6b-49a1-bb8a-6b630aa69d76"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.318294 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-kube-api-access-kbl96" (OuterVolumeSpecName: "kube-api-access-kbl96") pod "a5926c07-fe6b-49a1-bb8a-6b630aa69d76" (UID: "a5926c07-fe6b-49a1-bb8a-6b630aa69d76"). InnerVolumeSpecName "kube-api-access-kbl96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.374629 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5926c07-fe6b-49a1-bb8a-6b630aa69d76" (UID: "a5926c07-fe6b-49a1-bb8a-6b630aa69d76"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.398910 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-config" (OuterVolumeSpecName: "config") pod "a5926c07-fe6b-49a1-bb8a-6b630aa69d76" (UID: "a5926c07-fe6b-49a1-bb8a-6b630aa69d76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.403818 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a5926c07-fe6b-49a1-bb8a-6b630aa69d76" (UID: "a5926c07-fe6b-49a1-bb8a-6b630aa69d76"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.416159 5007 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.416211 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.416221 5007 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.416231 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbl96\" (UniqueName: \"kubernetes.io/projected/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-kube-api-access-kbl96\") on node \"crc\" DevicePath 
\"\"" Feb 18 19:57:55 crc kubenswrapper[5007]: I0218 19:57:55.416242 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5926c07-fe6b-49a1-bb8a-6b630aa69d76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:56 crc kubenswrapper[5007]: I0218 19:57:56.078117 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6977b8795d-9nwjb" event={"ID":"a5926c07-fe6b-49a1-bb8a-6b630aa69d76","Type":"ContainerDied","Data":"2d4d8da2f1fbfedd9480f7ad7acfd6b7ee7340ebb1f8425c18083792a7b268c4"} Feb 18 19:57:56 crc kubenswrapper[5007]: I0218 19:57:56.078496 5007 scope.go:117] "RemoveContainer" containerID="78f925ced014a3d97f31a5bc4811bc093815cb2e14e282e1a34d374464a16138" Feb 18 19:57:56 crc kubenswrapper[5007]: I0218 19:57:56.078423 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6977b8795d-9nwjb" Feb 18 19:57:56 crc kubenswrapper[5007]: I0218 19:57:56.084649 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f7b76fc49-wc8vb" event={"ID":"281212cf-1ff4-48bb-bfc7-73f4381ed878","Type":"ContainerStarted","Data":"436759e7e4f2d2af06611ed0000571526e6cd5e09b941697264250ba2c630ac0"} Feb 18 19:57:56 crc kubenswrapper[5007]: I0218 19:57:56.084857 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:56 crc kubenswrapper[5007]: I0218 19:57:56.109113 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6977b8795d-9nwjb"] Feb 18 19:57:56 crc kubenswrapper[5007]: I0218 19:57:56.121119 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6977b8795d-9nwjb"] Feb 18 19:57:56 crc kubenswrapper[5007]: I0218 19:57:56.131277 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-f7b76fc49-wc8vb" podStartSLOduration=3.131264118 
podStartE2EDuration="3.131264118s" podCreationTimestamp="2026-02-18 19:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:56.11736117 +0000 UTC m=+1160.899480694" watchObservedRunningTime="2026-02-18 19:57:56.131264118 +0000 UTC m=+1160.913383642" Feb 18 19:57:57 crc kubenswrapper[5007]: I0218 19:57:57.095485 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:57:57 crc kubenswrapper[5007]: I0218 19:57:57.929688 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5926c07-fe6b-49a1-bb8a-6b630aa69d76" path="/var/lib/kubelet/pods/a5926c07-fe6b-49a1-bb8a-6b630aa69d76/volumes" Feb 18 19:57:58 crc kubenswrapper[5007]: I0218 19:57:58.110005 5007 generic.go:334] "Generic (PLEG): container finished" podID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerID="d32ac9bc5c5b36cb1ccb6281c53243780f1146324542413a634ad1ffed7893a5" exitCode=0 Feb 18 19:57:58 crc kubenswrapper[5007]: I0218 19:57:58.110899 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86e68c1d-c428-4b53-8e46-7f179faddc5c","Type":"ContainerDied","Data":"d32ac9bc5c5b36cb1ccb6281c53243780f1146324542413a634ad1ffed7893a5"} Feb 18 19:57:59 crc kubenswrapper[5007]: I0218 19:57:59.920935 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.189:3000/\": dial tcp 10.217.0.189:3000: connect: connection refused" Feb 18 19:58:00 crc kubenswrapper[5007]: I0218 19:58:00.909259 5007 scope.go:117] "RemoveContainer" containerID="4c818401da43dcc1d8de232b852f7b2225dbd8e458d74bde2cedba9ed105f488" Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.446269 5007 scope.go:117] "RemoveContainer" 
containerID="21cbe18a0113f465ef646b2258c92a08fb33cf33cd6d39fc44868596e5abaff6" Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.799480 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.954734 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86e68c1d-c428-4b53-8e46-7f179faddc5c-log-httpd\") pod \"86e68c1d-c428-4b53-8e46-7f179faddc5c\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.954794 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-sg-core-conf-yaml\") pod \"86e68c1d-c428-4b53-8e46-7f179faddc5c\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.954917 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-config-data\") pod \"86e68c1d-c428-4b53-8e46-7f179faddc5c\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.954941 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-scripts\") pod \"86e68c1d-c428-4b53-8e46-7f179faddc5c\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.954976 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-combined-ca-bundle\") pod \"86e68c1d-c428-4b53-8e46-7f179faddc5c\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " Feb 
18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.955045 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c8wj\" (UniqueName: \"kubernetes.io/projected/86e68c1d-c428-4b53-8e46-7f179faddc5c-kube-api-access-9c8wj\") pod \"86e68c1d-c428-4b53-8e46-7f179faddc5c\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.955127 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86e68c1d-c428-4b53-8e46-7f179faddc5c-run-httpd\") pod \"86e68c1d-c428-4b53-8e46-7f179faddc5c\" (UID: \"86e68c1d-c428-4b53-8e46-7f179faddc5c\") " Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.955624 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86e68c1d-c428-4b53-8e46-7f179faddc5c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "86e68c1d-c428-4b53-8e46-7f179faddc5c" (UID: "86e68c1d-c428-4b53-8e46-7f179faddc5c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.955855 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86e68c1d-c428-4b53-8e46-7f179faddc5c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "86e68c1d-c428-4b53-8e46-7f179faddc5c" (UID: "86e68c1d-c428-4b53-8e46-7f179faddc5c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.959204 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e68c1d-c428-4b53-8e46-7f179faddc5c-kube-api-access-9c8wj" (OuterVolumeSpecName: "kube-api-access-9c8wj") pod "86e68c1d-c428-4b53-8e46-7f179faddc5c" (UID: "86e68c1d-c428-4b53-8e46-7f179faddc5c"). InnerVolumeSpecName "kube-api-access-9c8wj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.960269 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-scripts" (OuterVolumeSpecName: "scripts") pod "86e68c1d-c428-4b53-8e46-7f179faddc5c" (UID: "86e68c1d-c428-4b53-8e46-7f179faddc5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:01 crc kubenswrapper[5007]: I0218 19:58:01.988403 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "86e68c1d-c428-4b53-8e46-7f179faddc5c" (UID: "86e68c1d-c428-4b53-8e46-7f179faddc5c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.040683 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86e68c1d-c428-4b53-8e46-7f179faddc5c" (UID: "86e68c1d-c428-4b53-8e46-7f179faddc5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.059368 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c8wj\" (UniqueName: \"kubernetes.io/projected/86e68c1d-c428-4b53-8e46-7f179faddc5c-kube-api-access-9c8wj\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.059413 5007 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86e68c1d-c428-4b53-8e46-7f179faddc5c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.059453 5007 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86e68c1d-c428-4b53-8e46-7f179faddc5c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.059471 5007 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.059487 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.059506 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.083370 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-config-data" (OuterVolumeSpecName: "config-data") pod "86e68c1d-c428-4b53-8e46-7f179faddc5c" (UID: "86e68c1d-c428-4b53-8e46-7f179faddc5c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.159970 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617","Type":"ContainerStarted","Data":"90095ecad4ac73cf9c323a5cbf00de3632fcfc796a1d133cfb0f4ac1088cf620"} Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.161748 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86e68c1d-c428-4b53-8e46-7f179faddc5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.168588 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86e68c1d-c428-4b53-8e46-7f179faddc5c","Type":"ContainerDied","Data":"3968d8490dbc3df76c8802f21be3ac110433223f291b9d50bd58a5621041a079"} Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.168632 5007 scope.go:117] "RemoveContainer" containerID="4ea5499be3f27afd66a8c78e9d1237dc8adfabe346b2af8b80d4ffc2373c3c86" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.168719 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.176626 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7f95a535-dc89-494b-9e82-7e7c490dc529","Type":"ContainerStarted","Data":"4398a995f5dd55c75eb9d30202cb776b85c501a66631dc1d709d0d0a561c55b2"} Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.207499 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.210011 5007 scope.go:117] "RemoveContainer" containerID="dd30fe8b43f8664f8edac5c66fb7193a743f074cdd694f3c4d83e17a2ce64ba5" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.213984 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.234016 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.0893534049999998 podStartE2EDuration="13.23399465s" podCreationTimestamp="2026-02-18 19:57:49 +0000 UTC" firstStartedPulling="2026-02-18 19:57:50.392944788 +0000 UTC m=+1155.175064312" lastFinishedPulling="2026-02-18 19:58:01.537586033 +0000 UTC m=+1166.319705557" observedRunningTime="2026-02-18 19:58:02.223309154 +0000 UTC m=+1167.005428688" watchObservedRunningTime="2026-02-18 19:58:02.23399465 +0000 UTC m=+1167.016114174" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.242779 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.243050 5007 scope.go:117] "RemoveContainer" containerID="d32ac9bc5c5b36cb1ccb6281c53243780f1146324542413a634ad1ffed7893a5" Feb 18 19:58:02 crc kubenswrapper[5007]: E0218 19:58:02.243579 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="proxy-httpd" Feb 18 19:58:02 crc 
kubenswrapper[5007]: I0218 19:58:02.243601 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="proxy-httpd" Feb 18 19:58:02 crc kubenswrapper[5007]: E0218 19:58:02.243614 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5926c07-fe6b-49a1-bb8a-6b630aa69d76" containerName="neutron-httpd" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.243621 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5926c07-fe6b-49a1-bb8a-6b630aa69d76" containerName="neutron-httpd" Feb 18 19:58:02 crc kubenswrapper[5007]: E0218 19:58:02.243634 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="ceilometer-notification-agent" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.243643 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="ceilometer-notification-agent" Feb 18 19:58:02 crc kubenswrapper[5007]: E0218 19:58:02.243675 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="sg-core" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.243683 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="sg-core" Feb 18 19:58:02 crc kubenswrapper[5007]: E0218 19:58:02.243697 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5926c07-fe6b-49a1-bb8a-6b630aa69d76" containerName="neutron-api" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.243704 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5926c07-fe6b-49a1-bb8a-6b630aa69d76" containerName="neutron-api" Feb 18 19:58:02 crc kubenswrapper[5007]: E0218 19:58:02.243720 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="ceilometer-central-agent" Feb 18 19:58:02 crc 
kubenswrapper[5007]: I0218 19:58:02.243727 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="ceilometer-central-agent" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.243940 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="ceilometer-central-agent" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.243958 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="proxy-httpd" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.243975 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="ceilometer-notification-agent" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.243998 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5926c07-fe6b-49a1-bb8a-6b630aa69d76" containerName="neutron-httpd" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.244011 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" containerName="sg-core" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.244024 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5926c07-fe6b-49a1-bb8a-6b630aa69d76" containerName="neutron-api" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.246689 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.250047 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.250237 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.251378 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.307231 5007 scope.go:117] "RemoveContainer" containerID="e25f8dd04fdc96b8c79c3a3b3ffbda2a13cd122fbdf43fc59b942944901370d1" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.365281 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.365336 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-config-data\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.365372 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-scripts\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.365388 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/ed11bf22-9b65-49f2-9e06-ec749324d752-log-httpd\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.365456 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxk9m\" (UniqueName: \"kubernetes.io/projected/ed11bf22-9b65-49f2-9e06-ec749324d752-kube-api-access-cxk9m\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.365509 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed11bf22-9b65-49f2-9e06-ec749324d752-run-httpd\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.365525 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.467118 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.467169 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-config-data\") pod \"ceilometer-0\" (UID: 
\"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.467209 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-scripts\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.467225 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed11bf22-9b65-49f2-9e06-ec749324d752-log-httpd\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.467294 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxk9m\" (UniqueName: \"kubernetes.io/projected/ed11bf22-9b65-49f2-9e06-ec749324d752-kube-api-access-cxk9m\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.467346 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed11bf22-9b65-49f2-9e06-ec749324d752-run-httpd\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.467365 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.468143 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/ed11bf22-9b65-49f2-9e06-ec749324d752-log-httpd\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.468231 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed11bf22-9b65-49f2-9e06-ec749324d752-run-httpd\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.478027 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.478233 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.478397 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-scripts\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.483417 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-config-data\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.489202 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxk9m\" (UniqueName: \"kubernetes.io/projected/ed11bf22-9b65-49f2-9e06-ec749324d752-kube-api-access-cxk9m\") pod \"ceilometer-0\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " pod="openstack/ceilometer-0" Feb 18 19:58:02 crc kubenswrapper[5007]: I0218 19:58:02.574197 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:58:03 crc kubenswrapper[5007]: I0218 19:58:03.048795 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:58:03 crc kubenswrapper[5007]: I0218 19:58:03.227945 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed11bf22-9b65-49f2-9e06-ec749324d752","Type":"ContainerStarted","Data":"2bd28b7151937cd8e9f17e284048890757e44b7964454e81437484799389c8f8"} Feb 18 19:58:03 crc kubenswrapper[5007]: I0218 19:58:03.762081 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:58:03 crc kubenswrapper[5007]: I0218 19:58:03.764631 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f7b76fc49-wc8vb" Feb 18 19:58:03 crc kubenswrapper[5007]: I0218 19:58:03.919053 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86e68c1d-c428-4b53-8e46-7f179faddc5c" path="/var/lib/kubelet/pods/86e68c1d-c428-4b53-8e46-7f179faddc5c/volumes" Feb 18 19:58:03 crc kubenswrapper[5007]: I0218 19:58:03.970595 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:58:04 crc kubenswrapper[5007]: I0218 19:58:04.261697 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed11bf22-9b65-49f2-9e06-ec749324d752","Type":"ContainerStarted","Data":"50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63"} Feb 18 19:58:04 crc 
kubenswrapper[5007]: I0218 19:58:04.261809 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed11bf22-9b65-49f2-9e06-ec749324d752","Type":"ContainerStarted","Data":"70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43"} Feb 18 19:58:04 crc kubenswrapper[5007]: I0218 19:58:04.261830 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed11bf22-9b65-49f2-9e06-ec749324d752","Type":"ContainerStarted","Data":"673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967"} Feb 18 19:58:07 crc kubenswrapper[5007]: I0218 19:58:07.291035 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed11bf22-9b65-49f2-9e06-ec749324d752","Type":"ContainerStarted","Data":"871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890"} Feb 18 19:58:07 crc kubenswrapper[5007]: I0218 19:58:07.291225 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="ceilometer-central-agent" containerID="cri-o://673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967" gracePeriod=30 Feb 18 19:58:07 crc kubenswrapper[5007]: I0218 19:58:07.291298 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="ceilometer-notification-agent" containerID="cri-o://70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43" gracePeriod=30 Feb 18 19:58:07 crc kubenswrapper[5007]: I0218 19:58:07.291291 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="sg-core" containerID="cri-o://50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63" gracePeriod=30 Feb 18 19:58:07 crc kubenswrapper[5007]: I0218 19:58:07.291350 
5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="proxy-httpd" containerID="cri-o://871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890" gracePeriod=30 Feb 18 19:58:07 crc kubenswrapper[5007]: I0218 19:58:07.292017 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:58:07 crc kubenswrapper[5007]: I0218 19:58:07.315580 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.3902416579999999 podStartE2EDuration="5.315557879s" podCreationTimestamp="2026-02-18 19:58:02 +0000 UTC" firstStartedPulling="2026-02-18 19:58:03.063090357 +0000 UTC m=+1167.845209871" lastFinishedPulling="2026-02-18 19:58:06.988406568 +0000 UTC m=+1171.770526092" observedRunningTime="2026-02-18 19:58:07.312570683 +0000 UTC m=+1172.094690207" watchObservedRunningTime="2026-02-18 19:58:07.315557879 +0000 UTC m=+1172.097677393" Feb 18 19:58:07 crc kubenswrapper[5007]: I0218 19:58:07.928301 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:58:07 crc kubenswrapper[5007]: I0218 19:58:07.961908 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:58:08 crc kubenswrapper[5007]: I0218 19:58:08.302100 5007 generic.go:334] "Generic (PLEG): container finished" podID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerID="50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63" exitCode=2 Feb 18 19:58:08 crc kubenswrapper[5007]: I0218 19:58:08.302137 5007 generic.go:334] "Generic (PLEG): container finished" podID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerID="70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43" exitCode=0 Feb 18 19:58:08 crc kubenswrapper[5007]: I0218 19:58:08.302380 5007 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed11bf22-9b65-49f2-9e06-ec749324d752","Type":"ContainerDied","Data":"50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63"} Feb 18 19:58:08 crc kubenswrapper[5007]: I0218 19:58:08.302466 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed11bf22-9b65-49f2-9e06-ec749324d752","Type":"ContainerDied","Data":"70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43"} Feb 18 19:58:08 crc kubenswrapper[5007]: I0218 19:58:08.302636 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:58:08 crc kubenswrapper[5007]: I0218 19:58:08.338308 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 19:58:08 crc kubenswrapper[5007]: I0218 19:58:08.378958 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:58:08 crc kubenswrapper[5007]: I0218 19:58:08.794713 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:58:08 crc kubenswrapper[5007]: I0218 19:58:08.795349 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b43ec936-7a25-476a-8f4d-a5734f6f6044" containerName="glance-log" containerID="cri-o://9a89c0d2e562f5ff4df3f15db7317e07d5f4d3e0a6e908649e6e2c9127a84298" gracePeriod=30 Feb 18 19:58:08 crc kubenswrapper[5007]: I0218 19:58:08.795510 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b43ec936-7a25-476a-8f4d-a5734f6f6044" containerName="glance-httpd" containerID="cri-o://75ab128b0d92742a8fd6e43abd579e86b53cadc0d777c173e1ba83ee74ef0139" gracePeriod=30 Feb 18 19:58:09 crc kubenswrapper[5007]: I0218 19:58:09.313185 5007 
generic.go:334] "Generic (PLEG): container finished" podID="b43ec936-7a25-476a-8f4d-a5734f6f6044" containerID="9a89c0d2e562f5ff4df3f15db7317e07d5f4d3e0a6e908649e6e2c9127a84298" exitCode=143 Feb 18 19:58:09 crc kubenswrapper[5007]: I0218 19:58:09.313301 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b43ec936-7a25-476a-8f4d-a5734f6f6044","Type":"ContainerDied","Data":"9a89c0d2e562f5ff4df3f15db7317e07d5f4d3e0a6e908649e6e2c9127a84298"} Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.324645 5007 generic.go:334] "Generic (PLEG): container finished" podID="b43ec936-7a25-476a-8f4d-a5734f6f6044" containerID="75ab128b0d92742a8fd6e43abd579e86b53cadc0d777c173e1ba83ee74ef0139" exitCode=0 Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.325240 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" containerID="cri-o://90095ecad4ac73cf9c323a5cbf00de3632fcfc796a1d133cfb0f4ac1088cf620" gracePeriod=30 Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.325762 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b43ec936-7a25-476a-8f4d-a5734f6f6044","Type":"ContainerDied","Data":"75ab128b0d92742a8fd6e43abd579e86b53cadc0d777c173e1ba83ee74ef0139"} Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.753049 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.823461 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.823657 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="232fe30c-bc2a-4490-a02d-edfdf60baceb" containerName="watcher-applier" containerID="cri-o://16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200" gracePeriod=30 Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.852802 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b43ec936-7a25-476a-8f4d-a5734f6f6044\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.852867 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-combined-ca-bundle\") pod \"b43ec936-7a25-476a-8f4d-a5734f6f6044\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.852939 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-config-data\") pod \"b43ec936-7a25-476a-8f4d-a5734f6f6044\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.852963 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b43ec936-7a25-476a-8f4d-a5734f6f6044-httpd-run\") pod \"b43ec936-7a25-476a-8f4d-a5734f6f6044\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 
19:58:10.853022 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b43ec936-7a25-476a-8f4d-a5734f6f6044-logs\") pod \"b43ec936-7a25-476a-8f4d-a5734f6f6044\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.853046 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6p58\" (UniqueName: \"kubernetes.io/projected/b43ec936-7a25-476a-8f4d-a5734f6f6044-kube-api-access-b6p58\") pod \"b43ec936-7a25-476a-8f4d-a5734f6f6044\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.853122 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-scripts\") pod \"b43ec936-7a25-476a-8f4d-a5734f6f6044\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.853163 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-internal-tls-certs\") pod \"b43ec936-7a25-476a-8f4d-a5734f6f6044\" (UID: \"b43ec936-7a25-476a-8f4d-a5734f6f6044\") " Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.853292 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b43ec936-7a25-476a-8f4d-a5734f6f6044-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b43ec936-7a25-476a-8f4d-a5734f6f6044" (UID: "b43ec936-7a25-476a-8f4d-a5734f6f6044"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.853487 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b43ec936-7a25-476a-8f4d-a5734f6f6044-logs" (OuterVolumeSpecName: "logs") pod "b43ec936-7a25-476a-8f4d-a5734f6f6044" (UID: "b43ec936-7a25-476a-8f4d-a5734f6f6044"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.853890 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b43ec936-7a25-476a-8f4d-a5734f6f6044-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.853909 5007 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b43ec936-7a25-476a-8f4d-a5734f6f6044-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.864991 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-scripts" (OuterVolumeSpecName: "scripts") pod "b43ec936-7a25-476a-8f4d-a5734f6f6044" (UID: "b43ec936-7a25-476a-8f4d-a5734f6f6044"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.880544 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "b43ec936-7a25-476a-8f4d-a5734f6f6044" (UID: "b43ec936-7a25-476a-8f4d-a5734f6f6044"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.889585 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43ec936-7a25-476a-8f4d-a5734f6f6044-kube-api-access-b6p58" (OuterVolumeSpecName: "kube-api-access-b6p58") pod "b43ec936-7a25-476a-8f4d-a5734f6f6044" (UID: "b43ec936-7a25-476a-8f4d-a5734f6f6044"). InnerVolumeSpecName "kube-api-access-b6p58". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.893574 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.893983 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="627b8aa5-8116-4367-b84f-2ea2c26a93b0" containerName="watcher-api-log" containerID="cri-o://5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06" gracePeriod=30 Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.894472 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="627b8aa5-8116-4367-b84f-2ea2c26a93b0" containerName="watcher-api" containerID="cri-o://16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc" gracePeriod=30 Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.898490 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b43ec936-7a25-476a-8f4d-a5734f6f6044" (UID: "b43ec936-7a25-476a-8f4d-a5734f6f6044"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.955365 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6p58\" (UniqueName: \"kubernetes.io/projected/b43ec936-7a25-476a-8f4d-a5734f6f6044-kube-api-access-b6p58\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.955395 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.955416 5007 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.955438 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.978192 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-config-data" (OuterVolumeSpecName: "config-data") pod "b43ec936-7a25-476a-8f4d-a5734f6f6044" (UID: "b43ec936-7a25-476a-8f4d-a5734f6f6044"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.984535 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b43ec936-7a25-476a-8f4d-a5734f6f6044" (UID: "b43ec936-7a25-476a-8f4d-a5734f6f6044"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:10 crc kubenswrapper[5007]: I0218 19:58:10.987362 5007 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.057279 5007 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.058061 5007 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.058071 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43ec936-7a25-476a-8f4d-a5734f6f6044-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.335014 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b43ec936-7a25-476a-8f4d-a5734f6f6044","Type":"ContainerDied","Data":"cb920f6390840226cc88b8eddd70fbbda238f5845e2058e8c27ee401e7c82195"} Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.335051 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.336254 5007 scope.go:117] "RemoveContainer" containerID="75ab128b0d92742a8fd6e43abd579e86b53cadc0d777c173e1ba83ee74ef0139" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.337121 5007 generic.go:334] "Generic (PLEG): container finished" podID="627b8aa5-8116-4367-b84f-2ea2c26a93b0" containerID="5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06" exitCode=143 Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.337162 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"627b8aa5-8116-4367-b84f-2ea2c26a93b0","Type":"ContainerDied","Data":"5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06"} Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.358356 5007 scope.go:117] "RemoveContainer" containerID="9a89c0d2e562f5ff4df3f15db7317e07d5f4d3e0a6e908649e6e2c9127a84298" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.368224 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.384517 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.399352 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:58:11 crc kubenswrapper[5007]: E0218 19:58:11.399839 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43ec936-7a25-476a-8f4d-a5734f6f6044" containerName="glance-log" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.399861 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43ec936-7a25-476a-8f4d-a5734f6f6044" containerName="glance-log" Feb 18 19:58:11 crc kubenswrapper[5007]: E0218 19:58:11.399913 5007 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b43ec936-7a25-476a-8f4d-a5734f6f6044" containerName="glance-httpd" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.399923 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43ec936-7a25-476a-8f4d-a5734f6f6044" containerName="glance-httpd" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.400167 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43ec936-7a25-476a-8f4d-a5734f6f6044" containerName="glance-log" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.400193 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43ec936-7a25-476a-8f4d-a5734f6f6044" containerName="glance-httpd" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.401466 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.403209 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.403616 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.415113 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:58:11 crc kubenswrapper[5007]: E0218 19:58:11.513104 5007 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:58:11 crc kubenswrapper[5007]: E0218 19:58:11.514515 5007 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
stderr: , exit code -1" containerID="16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:58:11 crc kubenswrapper[5007]: E0218 19:58:11.515662 5007 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:58:11 crc kubenswrapper[5007]: E0218 19:58:11.515696 5007 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="232fe30c-bc2a-4490-a02d-edfdf60baceb" containerName="watcher-applier" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.567703 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d830bcf7-4451-48ff-ac2e-e957f46f3891-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.567754 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmqld\" (UniqueName: \"kubernetes.io/projected/d830bcf7-4451-48ff-ac2e-e957f46f3891-kube-api-access-kmqld\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.567776 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d830bcf7-4451-48ff-ac2e-e957f46f3891-logs\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.567832 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.567914 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d830bcf7-4451-48ff-ac2e-e957f46f3891-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.567966 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d830bcf7-4451-48ff-ac2e-e957f46f3891-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.568001 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d830bcf7-4451-48ff-ac2e-e957f46f3891-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.568020 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d830bcf7-4451-48ff-ac2e-e957f46f3891-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.669369 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d830bcf7-4451-48ff-ac2e-e957f46f3891-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.669696 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d830bcf7-4451-48ff-ac2e-e957f46f3891-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.669732 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmqld\" (UniqueName: \"kubernetes.io/projected/d830bcf7-4451-48ff-ac2e-e957f46f3891-kube-api-access-kmqld\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.669751 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d830bcf7-4451-48ff-ac2e-e957f46f3891-logs\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.669789 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.669860 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d830bcf7-4451-48ff-ac2e-e957f46f3891-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.669895 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d830bcf7-4451-48ff-ac2e-e957f46f3891-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.669931 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d830bcf7-4451-48ff-ac2e-e957f46f3891-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.672817 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d830bcf7-4451-48ff-ac2e-e957f46f3891-logs\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.673054 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d830bcf7-4451-48ff-ac2e-e957f46f3891-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.673593 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d830bcf7-4451-48ff-ac2e-e957f46f3891-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.673621 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.679895 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d830bcf7-4451-48ff-ac2e-e957f46f3891-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.682581 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d830bcf7-4451-48ff-ac2e-e957f46f3891-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.708155 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d830bcf7-4451-48ff-ac2e-e957f46f3891-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " 
pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.708754 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmqld\" (UniqueName: \"kubernetes.io/projected/d830bcf7-4451-48ff-ac2e-e957f46f3891-kube-api-access-kmqld\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.753539 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d830bcf7-4451-48ff-ac2e-e957f46f3891\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.773606 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.850375 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.953363 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43ec936-7a25-476a-8f4d-a5734f6f6044" path="/var/lib/kubelet/pods/b43ec936-7a25-476a-8f4d-a5734f6f6044/volumes" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.976352 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-internal-tls-certs\") pod \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.976504 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbskm\" (UniqueName: \"kubernetes.io/projected/627b8aa5-8116-4367-b84f-2ea2c26a93b0-kube-api-access-rbskm\") pod \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.976593 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-public-tls-certs\") pod \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.976650 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-custom-prometheus-ca\") pod \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.976689 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-combined-ca-bundle\") pod \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.976712 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-config-data\") pod \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.976754 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/627b8aa5-8116-4367-b84f-2ea2c26a93b0-logs\") pod \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\" (UID: \"627b8aa5-8116-4367-b84f-2ea2c26a93b0\") " Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.978035 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/627b8aa5-8116-4367-b84f-2ea2c26a93b0-logs" (OuterVolumeSpecName: "logs") pod "627b8aa5-8116-4367-b84f-2ea2c26a93b0" (UID: "627b8aa5-8116-4367-b84f-2ea2c26a93b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:11 crc kubenswrapper[5007]: I0218 19:58:11.986826 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/627b8aa5-8116-4367-b84f-2ea2c26a93b0-kube-api-access-rbskm" (OuterVolumeSpecName: "kube-api-access-rbskm") pod "627b8aa5-8116-4367-b84f-2ea2c26a93b0" (UID: "627b8aa5-8116-4367-b84f-2ea2c26a93b0"). InnerVolumeSpecName "kube-api-access-rbskm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.062184 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "627b8aa5-8116-4367-b84f-2ea2c26a93b0" (UID: "627b8aa5-8116-4367-b84f-2ea2c26a93b0"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.070580 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "627b8aa5-8116-4367-b84f-2ea2c26a93b0" (UID: "627b8aa5-8116-4367-b84f-2ea2c26a93b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.078691 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbskm\" (UniqueName: \"kubernetes.io/projected/627b8aa5-8116-4367-b84f-2ea2c26a93b0-kube-api-access-rbskm\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.083059 5007 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.083076 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.083086 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/627b8aa5-8116-4367-b84f-2ea2c26a93b0-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.091502 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-config-data" (OuterVolumeSpecName: "config-data") pod "627b8aa5-8116-4367-b84f-2ea2c26a93b0" (UID: "627b8aa5-8116-4367-b84f-2ea2c26a93b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.094741 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "627b8aa5-8116-4367-b84f-2ea2c26a93b0" (UID: "627b8aa5-8116-4367-b84f-2ea2c26a93b0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.096325 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "627b8aa5-8116-4367-b84f-2ea2c26a93b0" (UID: "627b8aa5-8116-4367-b84f-2ea2c26a93b0"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.184953 5007 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.184992 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.185009 5007 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/627b8aa5-8116-4367-b84f-2ea2c26a93b0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.356933 5007 generic.go:334] "Generic (PLEG): container finished" podID="627b8aa5-8116-4367-b84f-2ea2c26a93b0" containerID="16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc" exitCode=0 Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.357009 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.357042 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"627b8aa5-8116-4367-b84f-2ea2c26a93b0","Type":"ContainerDied","Data":"16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc"} Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.357099 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"627b8aa5-8116-4367-b84f-2ea2c26a93b0","Type":"ContainerDied","Data":"2854fd392e3ac4927ebd3d6d2d1c7ad61a40313c1f2d1cfc1867f13227309eb2"} Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.357125 5007 scope.go:117] "RemoveContainer" containerID="16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.359781 5007 generic.go:334] "Generic (PLEG): container finished" podID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerID="90095ecad4ac73cf9c323a5cbf00de3632fcfc796a1d133cfb0f4ac1088cf620" exitCode=0 Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.359813 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617","Type":"ContainerDied","Data":"90095ecad4ac73cf9c323a5cbf00de3632fcfc796a1d133cfb0f4ac1088cf620"} Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.410606 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.412586 5007 scope.go:117] "RemoveContainer" containerID="5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.431450 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.460492 5007 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/watcher-api-0"] Feb 18 19:58:12 crc kubenswrapper[5007]: E0218 19:58:12.460904 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="627b8aa5-8116-4367-b84f-2ea2c26a93b0" containerName="watcher-api-log" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.460923 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="627b8aa5-8116-4367-b84f-2ea2c26a93b0" containerName="watcher-api-log" Feb 18 19:58:12 crc kubenswrapper[5007]: E0218 19:58:12.460957 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="627b8aa5-8116-4367-b84f-2ea2c26a93b0" containerName="watcher-api" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.460964 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="627b8aa5-8116-4367-b84f-2ea2c26a93b0" containerName="watcher-api" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.461167 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="627b8aa5-8116-4367-b84f-2ea2c26a93b0" containerName="watcher-api-log" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.461202 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="627b8aa5-8116-4367-b84f-2ea2c26a93b0" containerName="watcher-api" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.462300 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.468594 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.468833 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.469152 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.472923 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.504890 5007 scope.go:117] "RemoveContainer" containerID="16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc" Feb 18 19:58:12 crc kubenswrapper[5007]: E0218 19:58:12.505886 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc\": container with ID starting with 16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc not found: ID does not exist" containerID="16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.505931 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc"} err="failed to get container status \"16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc\": rpc error: code = NotFound desc = could not find container \"16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc\": container with ID starting with 16ef8d72a7131557af02f504b048aecff1e10a7e00528c6c1ed4dbf86c0f98dc not found: ID does not exist" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 
19:58:12.505957 5007 scope.go:117] "RemoveContainer" containerID="5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06" Feb 18 19:58:12 crc kubenswrapper[5007]: E0218 19:58:12.506891 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06\": container with ID starting with 5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06 not found: ID does not exist" containerID="5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.506924 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06"} err="failed to get container status \"5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06\": rpc error: code = NotFound desc = could not find container \"5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06\": container with ID starting with 5beada5be70c4532143f2ceabdc9c604258b83b69ca90346547ac35160841e06 not found: ID does not exist" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.506946 5007 scope.go:117] "RemoveContainer" containerID="4c818401da43dcc1d8de232b852f7b2225dbd8e458d74bde2cedba9ed105f488" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.566490 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.594403 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b08ee157-bd44-45d0-a374-f48c5b9975fd-logs\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.594663 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.594693 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-public-tls-certs\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.594711 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.594729 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-config-data\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.594779 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.594863 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58snn\" 
(UniqueName: \"kubernetes.io/projected/b08ee157-bd44-45d0-a374-f48c5b9975fd-kube-api-access-58snn\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.696850 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b08ee157-bd44-45d0-a374-f48c5b9975fd-logs\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.696900 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.696944 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-public-tls-certs\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.696976 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.696996 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-config-data\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc 
kubenswrapper[5007]: I0218 19:58:12.697075 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.697253 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58snn\" (UniqueName: \"kubernetes.io/projected/b08ee157-bd44-45d0-a374-f48c5b9975fd-kube-api-access-58snn\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.697621 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b08ee157-bd44-45d0-a374-f48c5b9975fd-logs\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.704678 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-config-data\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.704730 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.705379 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-public-tls-certs\") 
pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.706082 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.710490 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b08ee157-bd44-45d0-a374-f48c5b9975fd-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.724130 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58snn\" (UniqueName: \"kubernetes.io/projected/b08ee157-bd44-45d0-a374-f48c5b9975fd-kube-api-access-58snn\") pod \"watcher-api-0\" (UID: \"b08ee157-bd44-45d0-a374-f48c5b9975fd\") " pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.799810 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:58:12 crc kubenswrapper[5007]: I0218 19:58:12.916812 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.003115 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-custom-prometheus-ca\") pod \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.003277 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-config-data\") pod \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.003298 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-combined-ca-bundle\") pod \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.003379 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psr6p\" (UniqueName: \"kubernetes.io/projected/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-kube-api-access-psr6p\") pod \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.003483 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-logs\") pod \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\" (UID: \"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617\") " Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.008264 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-logs" (OuterVolumeSpecName: "logs") pod "2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" (UID: "2d7e3b6c-20a8-4901-8cb2-b856b9e9b617"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.009868 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-kube-api-access-psr6p" (OuterVolumeSpecName: "kube-api-access-psr6p") pod "2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" (UID: "2d7e3b6c-20a8-4901-8cb2-b856b9e9b617"). InnerVolumeSpecName "kube-api-access-psr6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.034778 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" (UID: "2d7e3b6c-20a8-4901-8cb2-b856b9e9b617"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.073357 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" (UID: "2d7e3b6c-20a8-4901-8cb2-b856b9e9b617"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.080780 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-config-data" (OuterVolumeSpecName: "config-data") pod "2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" (UID: "2d7e3b6c-20a8-4901-8cb2-b856b9e9b617"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.106159 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.106187 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.106198 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psr6p\" (UniqueName: \"kubernetes.io/projected/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-kube-api-access-psr6p\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.106209 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.106217 5007 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.309284 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.313929 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:58:13 crc kubenswrapper[5007]: W0218 19:58:13.319150 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb08ee157_bd44_45d0_a374_f48c5b9975fd.slice/crio-97d4ecc2e82d2d49206fd861869271e04c3c84c2d79e89cd5bdfbf7c973d9734 WatchSource:0}: Error finding container 97d4ecc2e82d2d49206fd861869271e04c3c84c2d79e89cd5bdfbf7c973d9734: Status 404 returned error can't find the container with id 97d4ecc2e82d2d49206fd861869271e04c3c84c2d79e89cd5bdfbf7c973d9734 Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.384485 5007 generic.go:334] "Generic (PLEG): container finished" podID="232fe30c-bc2a-4490-a02d-edfdf60baceb" containerID="16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200" exitCode=0 Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.384557 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"232fe30c-bc2a-4490-a02d-edfdf60baceb","Type":"ContainerDied","Data":"16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200"} Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.384584 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"232fe30c-bc2a-4490-a02d-edfdf60baceb","Type":"ContainerDied","Data":"d7bc6709977a9217e8259125ef4077ab47610f04ed2ea7c5c733be7f018db7c9"} Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.384602 5007 scope.go:117] "RemoveContainer" containerID="16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.384695 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.390547 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2d7e3b6c-20a8-4901-8cb2-b856b9e9b617","Type":"ContainerDied","Data":"10eb1444b59b9689a40715e0268b5af46b36215ea83e0e35da2c4b6dadec5aeb"} Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.390622 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.411629 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/232fe30c-bc2a-4490-a02d-edfdf60baceb-logs\") pod \"232fe30c-bc2a-4490-a02d-edfdf60baceb\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.411926 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232fe30c-bc2a-4490-a02d-edfdf60baceb-config-data\") pod \"232fe30c-bc2a-4490-a02d-edfdf60baceb\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.411985 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74m6b\" (UniqueName: \"kubernetes.io/projected/232fe30c-bc2a-4490-a02d-edfdf60baceb-kube-api-access-74m6b\") pod \"232fe30c-bc2a-4490-a02d-edfdf60baceb\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.412119 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232fe30c-bc2a-4490-a02d-edfdf60baceb-combined-ca-bundle\") pod \"232fe30c-bc2a-4490-a02d-edfdf60baceb\" (UID: \"232fe30c-bc2a-4490-a02d-edfdf60baceb\") " Feb 18 19:58:13 crc 
kubenswrapper[5007]: I0218 19:58:13.412155 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232fe30c-bc2a-4490-a02d-edfdf60baceb-logs" (OuterVolumeSpecName: "logs") pod "232fe30c-bc2a-4490-a02d-edfdf60baceb" (UID: "232fe30c-bc2a-4490-a02d-edfdf60baceb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.412646 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/232fe30c-bc2a-4490-a02d-edfdf60baceb-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.425270 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d830bcf7-4451-48ff-ac2e-e957f46f3891","Type":"ContainerStarted","Data":"b80ff69b11a60a5e24e5362dc4c562be9cfcb46b69aa4ae8f8c0cbfbaa99f38b"} Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.425317 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d830bcf7-4451-48ff-ac2e-e957f46f3891","Type":"ContainerStarted","Data":"ce544c45f67b800798a0bce10b65236bc5dc4c2ca6338c6a647b16ed8a156f12"} Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.425585 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232fe30c-bc2a-4490-a02d-edfdf60baceb-kube-api-access-74m6b" (OuterVolumeSpecName: "kube-api-access-74m6b") pod "232fe30c-bc2a-4490-a02d-edfdf60baceb" (UID: "232fe30c-bc2a-4490-a02d-edfdf60baceb"). InnerVolumeSpecName "kube-api-access-74m6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.436557 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b08ee157-bd44-45d0-a374-f48c5b9975fd","Type":"ContainerStarted","Data":"97d4ecc2e82d2d49206fd861869271e04c3c84c2d79e89cd5bdfbf7c973d9734"} Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.443140 5007 scope.go:117] "RemoveContainer" containerID="16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200" Feb 18 19:58:13 crc kubenswrapper[5007]: E0218 19:58:13.444548 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200\": container with ID starting with 16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200 not found: ID does not exist" containerID="16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.444577 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200"} err="failed to get container status \"16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200\": rpc error: code = NotFound desc = could not find container \"16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200\": container with ID starting with 16e01b2b1f26b4b1a68a98426387fd8ff3fa81593ab4cd9e7efcccdf9aa0b200 not found: ID does not exist" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.444595 5007 scope.go:117] "RemoveContainer" containerID="90095ecad4ac73cf9c323a5cbf00de3632fcfc796a1d133cfb0f4ac1088cf620" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.463579 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/232fe30c-bc2a-4490-a02d-edfdf60baceb-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "232fe30c-bc2a-4490-a02d-edfdf60baceb" (UID: "232fe30c-bc2a-4490-a02d-edfdf60baceb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.472906 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.488559 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.509693 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:58:13 crc kubenswrapper[5007]: E0218 19:58:13.510233 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.510251 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" Feb 18 19:58:13 crc kubenswrapper[5007]: E0218 19:58:13.510265 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232fe30c-bc2a-4490-a02d-edfdf60baceb" containerName="watcher-applier" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.510271 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="232fe30c-bc2a-4490-a02d-edfdf60baceb" containerName="watcher-applier" Feb 18 19:58:13 crc kubenswrapper[5007]: E0218 19:58:13.510302 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.510310 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" Feb 18 19:58:13 crc kubenswrapper[5007]: E0218 19:58:13.510326 5007 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.510331 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.510492 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.510510 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.510523 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="232fe30c-bc2a-4490-a02d-edfdf60baceb" containerName="watcher-applier" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.511352 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.512210 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/232fe30c-bc2a-4490-a02d-edfdf60baceb-config-data" (OuterVolumeSpecName: "config-data") pod "232fe30c-bc2a-4490-a02d-edfdf60baceb" (UID: "232fe30c-bc2a-4490-a02d-edfdf60baceb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.514100 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232fe30c-bc2a-4490-a02d-edfdf60baceb-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.514134 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74m6b\" (UniqueName: \"kubernetes.io/projected/232fe30c-bc2a-4490-a02d-edfdf60baceb-kube-api-access-74m6b\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.514150 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232fe30c-bc2a-4490-a02d-edfdf60baceb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.521021 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.537633 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.615872 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.616394 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-config-data\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " 
pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.616586 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-logs\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.616802 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppl4s\" (UniqueName: \"kubernetes.io/projected/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-kube-api-access-ppl4s\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.616963 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.718730 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.718814 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-config-data\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " 
pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.718854 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-logs\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.718937 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppl4s\" (UniqueName: \"kubernetes.io/projected/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-kube-api-access-ppl4s\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.718965 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.719564 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-logs\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.724541 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.725516 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-config-data\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.725857 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.736934 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppl4s\" (UniqueName: \"kubernetes.io/projected/5bc0e870-e6b8-4c63-aacd-d7a2b389a64a-kube-api-access-ppl4s\") pod \"watcher-decision-engine-0\" (UID: \"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.831349 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.863097 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.878576 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.963608 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="232fe30c-bc2a-4490-a02d-edfdf60baceb" path="/var/lib/kubelet/pods/232fe30c-bc2a-4490-a02d-edfdf60baceb/volumes" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.964590 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" path="/var/lib/kubelet/pods/2d7e3b6c-20a8-4901-8cb2-b856b9e9b617/volumes" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.965363 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="627b8aa5-8116-4367-b84f-2ea2c26a93b0" path="/var/lib/kubelet/pods/627b8aa5-8116-4367-b84f-2ea2c26a93b0/volumes" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.972824 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:58:13 crc kubenswrapper[5007]: E0218 19:58:13.973586 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.973627 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.974009 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.974919 5007 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.977457 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 19:58:13 crc kubenswrapper[5007]: I0218 19:58:13.986258 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.038215 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be06aeb6-c370-406f-a31e-891e3d4be82b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"be06aeb6-c370-406f-a31e-891e3d4be82b\") " pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.038666 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be06aeb6-c370-406f-a31e-891e3d4be82b-config-data\") pod \"watcher-applier-0\" (UID: \"be06aeb6-c370-406f-a31e-891e3d4be82b\") " pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.038754 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be06aeb6-c370-406f-a31e-891e3d4be82b-logs\") pod \"watcher-applier-0\" (UID: \"be06aeb6-c370-406f-a31e-891e3d4be82b\") " pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.038799 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn575\" (UniqueName: \"kubernetes.io/projected/be06aeb6-c370-406f-a31e-891e3d4be82b-kube-api-access-qn575\") pod \"watcher-applier-0\" (UID: \"be06aeb6-c370-406f-a31e-891e3d4be82b\") " pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.143325 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be06aeb6-c370-406f-a31e-891e3d4be82b-config-data\") pod \"watcher-applier-0\" (UID: \"be06aeb6-c370-406f-a31e-891e3d4be82b\") " pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.143402 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be06aeb6-c370-406f-a31e-891e3d4be82b-logs\") pod \"watcher-applier-0\" (UID: \"be06aeb6-c370-406f-a31e-891e3d4be82b\") " pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.143446 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn575\" (UniqueName: \"kubernetes.io/projected/be06aeb6-c370-406f-a31e-891e3d4be82b-kube-api-access-qn575\") pod \"watcher-applier-0\" (UID: \"be06aeb6-c370-406f-a31e-891e3d4be82b\") " pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.143568 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be06aeb6-c370-406f-a31e-891e3d4be82b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"be06aeb6-c370-406f-a31e-891e3d4be82b\") " pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.144700 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be06aeb6-c370-406f-a31e-891e3d4be82b-logs\") pod \"watcher-applier-0\" (UID: \"be06aeb6-c370-406f-a31e-891e3d4be82b\") " pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.163564 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be06aeb6-c370-406f-a31e-891e3d4be82b-combined-ca-bundle\") pod 
\"watcher-applier-0\" (UID: \"be06aeb6-c370-406f-a31e-891e3d4be82b\") " pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.166068 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn575\" (UniqueName: \"kubernetes.io/projected/be06aeb6-c370-406f-a31e-891e3d4be82b-kube-api-access-qn575\") pod \"watcher-applier-0\" (UID: \"be06aeb6-c370-406f-a31e-891e3d4be82b\") " pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.166768 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be06aeb6-c370-406f-a31e-891e3d4be82b-config-data\") pod \"watcher-applier-0\" (UID: \"be06aeb6-c370-406f-a31e-891e3d4be82b\") " pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.342371 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.417112 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:58:14 crc kubenswrapper[5007]: W0218 19:58:14.427647 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc0e870_e6b8_4c63_aacd_d7a2b389a64a.slice/crio-6b11a24650e1e11cdca566d90e89cb178d649e2e0e364d269420be4e39a3046c WatchSource:0}: Error finding container 6b11a24650e1e11cdca566d90e89cb178d649e2e0e364d269420be4e39a3046c: Status 404 returned error can't find the container with id 6b11a24650e1e11cdca566d90e89cb178d649e2e0e364d269420be4e39a3046c Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.460793 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d830bcf7-4451-48ff-ac2e-e957f46f3891","Type":"ContainerStarted","Data":"221dedaf566a76550f9a2fd9dd39724632d84c7b723eb072a6e70ee1faec898f"} Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.463118 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a","Type":"ContainerStarted","Data":"6b11a24650e1e11cdca566d90e89cb178d649e2e0e364d269420be4e39a3046c"} Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.464612 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b08ee157-bd44-45d0-a374-f48c5b9975fd","Type":"ContainerStarted","Data":"c7b02fc338dc0a0693776e7399f2ba7614d2329c9ab77d59b90edeb97df41f6c"} Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.464661 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b08ee157-bd44-45d0-a374-f48c5b9975fd","Type":"ContainerStarted","Data":"ed673761d5501d3742943eedc2d0d962ad327fcb043430309d2147ac6fbc4c05"} Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.465206 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.500947 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.500925397 podStartE2EDuration="3.500925397s" podCreationTimestamp="2026-02-18 19:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:14.489742857 +0000 UTC m=+1179.271862381" watchObservedRunningTime="2026-02-18 19:58:14.500925397 +0000 UTC m=+1179.283044921" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.522388 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.522371432 
podStartE2EDuration="2.522371432s" podCreationTimestamp="2026-02-18 19:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:14.511018727 +0000 UTC m=+1179.293138251" watchObservedRunningTime="2026-02-18 19:58:14.522371432 +0000 UTC m=+1179.304490956" Feb 18 19:58:14 crc kubenswrapper[5007]: I0218 19:58:14.820903 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:58:14 crc kubenswrapper[5007]: W0218 19:58:14.825629 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe06aeb6_c370_406f_a31e_891e3d4be82b.slice/crio-10a7fb5e59c565973c12fbb2bb689aacf9e651c7f247d1f89d1c9539335fb5e2 WatchSource:0}: Error finding container 10a7fb5e59c565973c12fbb2bb689aacf9e651c7f247d1f89d1c9539335fb5e2: Status 404 returned error can't find the container with id 10a7fb5e59c565973c12fbb2bb689aacf9e651c7f247d1f89d1c9539335fb5e2 Feb 18 19:58:15 crc kubenswrapper[5007]: I0218 19:58:15.477238 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"be06aeb6-c370-406f-a31e-891e3d4be82b","Type":"ContainerStarted","Data":"ffaf8f5ff2511f219d3de4dfa030ed3f8a5ca30e18fa52e515a8425033b8847f"} Feb 18 19:58:15 crc kubenswrapper[5007]: I0218 19:58:15.477377 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"be06aeb6-c370-406f-a31e-891e3d4be82b","Type":"ContainerStarted","Data":"10a7fb5e59c565973c12fbb2bb689aacf9e651c7f247d1f89d1c9539335fb5e2"} Feb 18 19:58:15 crc kubenswrapper[5007]: I0218 19:58:15.480030 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"5bc0e870-e6b8-4c63-aacd-d7a2b389a64a","Type":"ContainerStarted","Data":"0f8cc1e8dd3bd8ecf0efcad10b921cb7762efa15bad8cd5dbb39436f35816ef7"} Feb 18 19:58:15 crc 
kubenswrapper[5007]: I0218 19:58:15.519341 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.519311937 podStartE2EDuration="2.519311937s" podCreationTimestamp="2026-02-18 19:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:15.49077301 +0000 UTC m=+1180.272892534" watchObservedRunningTime="2026-02-18 19:58:15.519311937 +0000 UTC m=+1180.301431461" Feb 18 19:58:15 crc kubenswrapper[5007]: I0218 19:58:15.537291 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.5372728110000002 podStartE2EDuration="2.537272811s" podCreationTimestamp="2026-02-18 19:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:15.516519927 +0000 UTC m=+1180.298639441" watchObservedRunningTime="2026-02-18 19:58:15.537272811 +0000 UTC m=+1180.319392335" Feb 18 19:58:15 crc kubenswrapper[5007]: I0218 19:58:15.664375 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:58:15 crc kubenswrapper[5007]: I0218 19:58:15.664475 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:58:15 crc kubenswrapper[5007]: I0218 19:58:15.664531 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 19:58:15 crc kubenswrapper[5007]: I0218 19:58:15.665309 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"735004cd6ccf501f03eefba61a70e42cc068351a26665fe459f03c839d84882e"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:58:15 crc kubenswrapper[5007]: I0218 19:58:15.665369 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://735004cd6ccf501f03eefba61a70e42cc068351a26665fe459f03c839d84882e" gracePeriod=600 Feb 18 19:58:16 crc kubenswrapper[5007]: I0218 19:58:16.475801 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:58:16 crc kubenswrapper[5007]: I0218 19:58:16.476656 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fcdb880d-97da-4651-983e-8ccef0ea3d51" containerName="glance-log" containerID="cri-o://5a7f435e5054236a3e70856dbaa01ee68eb64cba98fc4214bb88521857486b4c" gracePeriod=30 Feb 18 19:58:16 crc kubenswrapper[5007]: I0218 19:58:16.476770 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fcdb880d-97da-4651-983e-8ccef0ea3d51" containerName="glance-httpd" containerID="cri-o://748a01dc1585cdee74af6c83fbe1e9b37faac76a7ac811dbf84751d32d0ee76a" gracePeriod=30 Feb 18 19:58:16 crc kubenswrapper[5007]: I0218 19:58:16.496817 5007 generic.go:334] "Generic (PLEG): container finished" podID="ed11bf22-9b65-49f2-9e06-ec749324d752" 
containerID="673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967" exitCode=0 Feb 18 19:58:16 crc kubenswrapper[5007]: I0218 19:58:16.497545 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed11bf22-9b65-49f2-9e06-ec749324d752","Type":"ContainerDied","Data":"673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967"} Feb 18 19:58:16 crc kubenswrapper[5007]: I0218 19:58:16.511577 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="735004cd6ccf501f03eefba61a70e42cc068351a26665fe459f03c839d84882e" exitCode=0 Feb 18 19:58:16 crc kubenswrapper[5007]: I0218 19:58:16.512691 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"735004cd6ccf501f03eefba61a70e42cc068351a26665fe459f03c839d84882e"} Feb 18 19:58:16 crc kubenswrapper[5007]: I0218 19:58:16.512800 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"2fca2ab9c592f01755a09509912ed2a7ab21cefa3453d87cc11d33a8e4e411bb"} Feb 18 19:58:16 crc kubenswrapper[5007]: I0218 19:58:16.512870 5007 scope.go:117] "RemoveContainer" containerID="0761fcff0a059a787d7d4e6ecd66cfd18d646e97ccddeb6ea48208f9ee32bdee" Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.305239 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.531517 5007 generic.go:334] "Generic (PLEG): container finished" podID="fcdb880d-97da-4651-983e-8ccef0ea3d51" containerID="748a01dc1585cdee74af6c83fbe1e9b37faac76a7ac811dbf84751d32d0ee76a" exitCode=0 Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.531552 5007 
generic.go:334] "Generic (PLEG): container finished" podID="fcdb880d-97da-4651-983e-8ccef0ea3d51" containerID="5a7f435e5054236a3e70856dbaa01ee68eb64cba98fc4214bb88521857486b4c" exitCode=143 Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.531643 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcdb880d-97da-4651-983e-8ccef0ea3d51","Type":"ContainerDied","Data":"748a01dc1585cdee74af6c83fbe1e9b37faac76a7ac811dbf84751d32d0ee76a"} Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.531720 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcdb880d-97da-4651-983e-8ccef0ea3d51","Type":"ContainerDied","Data":"5a7f435e5054236a3e70856dbaa01ee68eb64cba98fc4214bb88521857486b4c"} Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.800275 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.851203 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.915348 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-scripts\") pod \"fcdb880d-97da-4651-983e-8ccef0ea3d51\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.915386 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzwhw\" (UniqueName: \"kubernetes.io/projected/fcdb880d-97da-4651-983e-8ccef0ea3d51-kube-api-access-kzwhw\") pod \"fcdb880d-97da-4651-983e-8ccef0ea3d51\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.915418 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-combined-ca-bundle\") pod \"fcdb880d-97da-4651-983e-8ccef0ea3d51\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.915453 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-public-tls-certs\") pod \"fcdb880d-97da-4651-983e-8ccef0ea3d51\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.915525 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcdb880d-97da-4651-983e-8ccef0ea3d51-logs\") pod \"fcdb880d-97da-4651-983e-8ccef0ea3d51\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.915571 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"fcdb880d-97da-4651-983e-8ccef0ea3d51\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.915703 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-config-data\") pod \"fcdb880d-97da-4651-983e-8ccef0ea3d51\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.915736 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcdb880d-97da-4651-983e-8ccef0ea3d51-httpd-run\") pod \"fcdb880d-97da-4651-983e-8ccef0ea3d51\" (UID: \"fcdb880d-97da-4651-983e-8ccef0ea3d51\") " Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.917963 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcdb880d-97da-4651-983e-8ccef0ea3d51-logs" (OuterVolumeSpecName: "logs") pod "fcdb880d-97da-4651-983e-8ccef0ea3d51" (UID: "fcdb880d-97da-4651-983e-8ccef0ea3d51"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.922421 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcdb880d-97da-4651-983e-8ccef0ea3d51-kube-api-access-kzwhw" (OuterVolumeSpecName: "kube-api-access-kzwhw") pod "fcdb880d-97da-4651-983e-8ccef0ea3d51" (UID: "fcdb880d-97da-4651-983e-8ccef0ea3d51"). InnerVolumeSpecName "kube-api-access-kzwhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.927748 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "fcdb880d-97da-4651-983e-8ccef0ea3d51" (UID: "fcdb880d-97da-4651-983e-8ccef0ea3d51"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.928672 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-scripts" (OuterVolumeSpecName: "scripts") pod "fcdb880d-97da-4651-983e-8ccef0ea3d51" (UID: "fcdb880d-97da-4651-983e-8ccef0ea3d51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.931005 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcdb880d-97da-4651-983e-8ccef0ea3d51-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fcdb880d-97da-4651-983e-8ccef0ea3d51" (UID: "fcdb880d-97da-4651-983e-8ccef0ea3d51"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:17 crc kubenswrapper[5007]: I0218 19:58:17.957455 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcdb880d-97da-4651-983e-8ccef0ea3d51" (UID: "fcdb880d-97da-4651-983e-8ccef0ea3d51"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.009324 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-config-data" (OuterVolumeSpecName: "config-data") pod "fcdb880d-97da-4651-983e-8ccef0ea3d51" (UID: "fcdb880d-97da-4651-983e-8ccef0ea3d51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.015407 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fcdb880d-97da-4651-983e-8ccef0ea3d51" (UID: "fcdb880d-97da-4651-983e-8ccef0ea3d51"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.018768 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.018800 5007 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcdb880d-97da-4651-983e-8ccef0ea3d51-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.018811 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.018822 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzwhw\" (UniqueName: \"kubernetes.io/projected/fcdb880d-97da-4651-983e-8ccef0ea3d51-kube-api-access-kzwhw\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:18 crc 
kubenswrapper[5007]: I0218 19:58:18.018834 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.018842 5007 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcdb880d-97da-4651-983e-8ccef0ea3d51-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.018850 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcdb880d-97da-4651-983e-8ccef0ea3d51-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.018875 5007 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.047324 5007 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.120487 5007 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.553360 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcdb880d-97da-4651-983e-8ccef0ea3d51","Type":"ContainerDied","Data":"704fcd518164747a9c307c2a17c09f89da5c9fe240e8604af41dd1f96181671c"} Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.553414 5007 scope.go:117] "RemoveContainer" containerID="748a01dc1585cdee74af6c83fbe1e9b37faac76a7ac811dbf84751d32d0ee76a" Feb 18 
19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.553446 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.581722 5007 scope.go:117] "RemoveContainer" containerID="5a7f435e5054236a3e70856dbaa01ee68eb64cba98fc4214bb88521857486b4c" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.599072 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.616205 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.628009 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:58:18 crc kubenswrapper[5007]: E0218 19:58:18.628496 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcdb880d-97da-4651-983e-8ccef0ea3d51" containerName="glance-log" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.628519 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcdb880d-97da-4651-983e-8ccef0ea3d51" containerName="glance-log" Feb 18 19:58:18 crc kubenswrapper[5007]: E0218 19:58:18.628562 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcdb880d-97da-4651-983e-8ccef0ea3d51" containerName="glance-httpd" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.628571 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcdb880d-97da-4651-983e-8ccef0ea3d51" containerName="glance-httpd" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.629948 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcdb880d-97da-4651-983e-8ccef0ea3d51" containerName="glance-httpd" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.629977 5007 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2d7e3b6c-20a8-4901-8cb2-b856b9e9b617" containerName="watcher-decision-engine" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.629997 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcdb880d-97da-4651-983e-8ccef0ea3d51" containerName="glance-log" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.630995 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.633313 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.633878 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.637525 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.731229 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b3fc98-cc91-4ceb-83b4-6311062abd32-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.731339 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b3fc98-cc91-4ceb-83b4-6311062abd32-scripts\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.731364 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/56b3fc98-cc91-4ceb-83b4-6311062abd32-config-data\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.731449 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4kj2\" (UniqueName: \"kubernetes.io/projected/56b3fc98-cc91-4ceb-83b4-6311062abd32-kube-api-access-k4kj2\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.731477 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b3fc98-cc91-4ceb-83b4-6311062abd32-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.731498 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.731517 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56b3fc98-cc91-4ceb-83b4-6311062abd32-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.731545 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b3fc98-cc91-4ceb-83b4-6311062abd32-logs\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.833198 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b3fc98-cc91-4ceb-83b4-6311062abd32-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.833294 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b3fc98-cc91-4ceb-83b4-6311062abd32-scripts\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.833318 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b3fc98-cc91-4ceb-83b4-6311062abd32-config-data\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.833415 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4kj2\" (UniqueName: \"kubernetes.io/projected/56b3fc98-cc91-4ceb-83b4-6311062abd32-kube-api-access-k4kj2\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.833470 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/56b3fc98-cc91-4ceb-83b4-6311062abd32-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.833496 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.833522 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56b3fc98-cc91-4ceb-83b4-6311062abd32-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.833564 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b3fc98-cc91-4ceb-83b4-6311062abd32-logs\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.834147 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56b3fc98-cc91-4ceb-83b4-6311062abd32-logs\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.834192 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56b3fc98-cc91-4ceb-83b4-6311062abd32-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.834331 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.838680 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56b3fc98-cc91-4ceb-83b4-6311062abd32-scripts\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.839041 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56b3fc98-cc91-4ceb-83b4-6311062abd32-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.841038 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56b3fc98-cc91-4ceb-83b4-6311062abd32-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.841956 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56b3fc98-cc91-4ceb-83b4-6311062abd32-config-data\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.856392 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4kj2\" (UniqueName: \"kubernetes.io/projected/56b3fc98-cc91-4ceb-83b4-6311062abd32-kube-api-access-k4kj2\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.880057 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"56b3fc98-cc91-4ceb-83b4-6311062abd32\") " pod="openstack/glance-default-external-api-0" Feb 18 19:58:18 crc kubenswrapper[5007]: I0218 19:58:18.948887 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:58:19 crc kubenswrapper[5007]: I0218 19:58:19.342738 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 18 19:58:19 crc kubenswrapper[5007]: I0218 19:58:19.652995 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:58:19 crc kubenswrapper[5007]: I0218 19:58:19.919909 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcdb880d-97da-4651-983e-8ccef0ea3d51" path="/var/lib/kubelet/pods/fcdb880d-97da-4651-983e-8ccef0ea3d51/volumes" Feb 18 19:58:20 crc kubenswrapper[5007]: I0218 19:58:20.586622 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56b3fc98-cc91-4ceb-83b4-6311062abd32","Type":"ContainerStarted","Data":"ad09a7196c362e283a764ee8967dbac7af76fb22596907d75dfde3183bea3c85"} Feb 18 19:58:20 crc kubenswrapper[5007]: I0218 19:58:20.587006 5007 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-external-api-0" event={"ID":"56b3fc98-cc91-4ceb-83b4-6311062abd32","Type":"ContainerStarted","Data":"485d900f29d15b7ec5e24795822bc931d632f0127e905832df97521416bce3ce"} Feb 18 19:58:21 crc kubenswrapper[5007]: I0218 19:58:21.602631 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56b3fc98-cc91-4ceb-83b4-6311062abd32","Type":"ContainerStarted","Data":"05ecb4c029a0ede8053882841c82ea366c17c43233067af7ac0060e71887b300"} Feb 18 19:58:21 crc kubenswrapper[5007]: I0218 19:58:21.641101 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.641072231 podStartE2EDuration="3.641072231s" podCreationTimestamp="2026-02-18 19:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:21.627159062 +0000 UTC m=+1186.409278616" watchObservedRunningTime="2026-02-18 19:58:21.641072231 +0000 UTC m=+1186.423191795" Feb 18 19:58:21 crc kubenswrapper[5007]: I0218 19:58:21.774609 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:58:21 crc kubenswrapper[5007]: I0218 19:58:21.774658 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:58:21 crc kubenswrapper[5007]: I0218 19:58:21.812628 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:58:21 crc kubenswrapper[5007]: I0218 19:58:21.823071 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:58:22 crc kubenswrapper[5007]: I0218 19:58:22.611560 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Feb 18 19:58:22 crc kubenswrapper[5007]: I0218 19:58:22.611604 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:58:22 crc kubenswrapper[5007]: I0218 19:58:22.801021 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 19:58:22 crc kubenswrapper[5007]: I0218 19:58:22.809563 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 19:58:23 crc kubenswrapper[5007]: I0218 19:58:23.650850 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:58:23 crc kubenswrapper[5007]: I0218 19:58:23.831971 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:58:23 crc kubenswrapper[5007]: I0218 19:58:23.880349 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:58:24 crc kubenswrapper[5007]: I0218 19:58:24.343590 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 18 19:58:24 crc kubenswrapper[5007]: I0218 19:58:24.362375 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:58:24 crc kubenswrapper[5007]: I0218 19:58:24.403082 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:58:24 crc kubenswrapper[5007]: I0218 19:58:24.417771 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 18 19:58:24 crc kubenswrapper[5007]: I0218 19:58:24.632271 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:58:24 crc 
kubenswrapper[5007]: I0218 19:58:24.664535 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 19:58:24 crc kubenswrapper[5007]: I0218 19:58:24.665218 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.502959 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-6csdv"] Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.504819 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6csdv" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.576947 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6csdv"] Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.610591 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b-operator-scripts\") pod \"nova-api-db-create-6csdv\" (UID: \"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b\") " pod="openstack/nova-api-db-create-6csdv" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.610698 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vkx8\" (UniqueName: \"kubernetes.io/projected/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b-kube-api-access-6vkx8\") pod \"nova-api-db-create-6csdv\" (UID: \"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b\") " pod="openstack/nova-api-db-create-6csdv" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.613101 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dgmfr"] Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.614411 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dgmfr" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.636987 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dgmfr"] Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.705069 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e5eb-account-create-update-bxzlj"] Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.706272 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e5eb-account-create-update-bxzlj" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.712227 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b-operator-scripts\") pod \"nova-api-db-create-6csdv\" (UID: \"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b\") " pod="openstack/nova-api-db-create-6csdv" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.712597 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.712958 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b-operator-scripts\") pod \"nova-api-db-create-6csdv\" (UID: \"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b\") " pod="openstack/nova-api-db-create-6csdv" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.713015 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3c479f-9b46-4418-b5c3-60b63b6aaa39-operator-scripts\") pod \"nova-cell0-db-create-dgmfr\" (UID: \"6d3c479f-9b46-4418-b5c3-60b63b6aaa39\") " pod="openstack/nova-cell0-db-create-dgmfr" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.713072 
5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6cv8\" (UniqueName: \"kubernetes.io/projected/6d3c479f-9b46-4418-b5c3-60b63b6aaa39-kube-api-access-n6cv8\") pod \"nova-cell0-db-create-dgmfr\" (UID: \"6d3c479f-9b46-4418-b5c3-60b63b6aaa39\") " pod="openstack/nova-cell0-db-create-dgmfr" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.713167 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vkx8\" (UniqueName: \"kubernetes.io/projected/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b-kube-api-access-6vkx8\") pod \"nova-api-db-create-6csdv\" (UID: \"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b\") " pod="openstack/nova-api-db-create-6csdv" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.720557 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e5eb-account-create-update-bxzlj"] Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.736953 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vkx8\" (UniqueName: \"kubernetes.io/projected/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b-kube-api-access-6vkx8\") pod \"nova-api-db-create-6csdv\" (UID: \"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b\") " pod="openstack/nova-api-db-create-6csdv" Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.809779 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-68h4k"] Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.811542 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-68h4k"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.815277 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6cv8\" (UniqueName: \"kubernetes.io/projected/6d3c479f-9b46-4418-b5c3-60b63b6aaa39-kube-api-access-n6cv8\") pod \"nova-cell0-db-create-dgmfr\" (UID: \"6d3c479f-9b46-4418-b5c3-60b63b6aaa39\") " pod="openstack/nova-cell0-db-create-dgmfr"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.815407 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c39b11-db3a-4cff-b4ca-bbb3a533065c-operator-scripts\") pod \"nova-api-e5eb-account-create-update-bxzlj\" (UID: \"98c39b11-db3a-4cff-b4ca-bbb3a533065c\") " pod="openstack/nova-api-e5eb-account-create-update-bxzlj"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.815535 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62xnr\" (UniqueName: \"kubernetes.io/projected/98c39b11-db3a-4cff-b4ca-bbb3a533065c-kube-api-access-62xnr\") pod \"nova-api-e5eb-account-create-update-bxzlj\" (UID: \"98c39b11-db3a-4cff-b4ca-bbb3a533065c\") " pod="openstack/nova-api-e5eb-account-create-update-bxzlj"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.815557 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3c479f-9b46-4418-b5c3-60b63b6aaa39-operator-scripts\") pod \"nova-cell0-db-create-dgmfr\" (UID: \"6d3c479f-9b46-4418-b5c3-60b63b6aaa39\") " pod="openstack/nova-cell0-db-create-dgmfr"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.816464 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3c479f-9b46-4418-b5c3-60b63b6aaa39-operator-scripts\") pod \"nova-cell0-db-create-dgmfr\" (UID: \"6d3c479f-9b46-4418-b5c3-60b63b6aaa39\") " pod="openstack/nova-cell0-db-create-dgmfr"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.829719 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-68h4k"]
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.857570 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6cv8\" (UniqueName: \"kubernetes.io/projected/6d3c479f-9b46-4418-b5c3-60b63b6aaa39-kube-api-access-n6cv8\") pod \"nova-cell0-db-create-dgmfr\" (UID: \"6d3c479f-9b46-4418-b5c3-60b63b6aaa39\") " pod="openstack/nova-cell0-db-create-dgmfr"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.878810 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6csdv"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.917424 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c39b11-db3a-4cff-b4ca-bbb3a533065c-operator-scripts\") pod \"nova-api-e5eb-account-create-update-bxzlj\" (UID: \"98c39b11-db3a-4cff-b4ca-bbb3a533065c\") " pod="openstack/nova-api-e5eb-account-create-update-bxzlj"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.917574 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62xnr\" (UniqueName: \"kubernetes.io/projected/98c39b11-db3a-4cff-b4ca-bbb3a533065c-kube-api-access-62xnr\") pod \"nova-api-e5eb-account-create-update-bxzlj\" (UID: \"98c39b11-db3a-4cff-b4ca-bbb3a533065c\") " pod="openstack/nova-api-e5eb-account-create-update-bxzlj"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.917697 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/910c61d8-825c-4f88-923b-69e3e0d20388-operator-scripts\") pod \"nova-cell1-db-create-68h4k\" (UID: \"910c61d8-825c-4f88-923b-69e3e0d20388\") " pod="openstack/nova-cell1-db-create-68h4k"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.917784 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5sjj\" (UniqueName: \"kubernetes.io/projected/910c61d8-825c-4f88-923b-69e3e0d20388-kube-api-access-t5sjj\") pod \"nova-cell1-db-create-68h4k\" (UID: \"910c61d8-825c-4f88-923b-69e3e0d20388\") " pod="openstack/nova-cell1-db-create-68h4k"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.918620 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c39b11-db3a-4cff-b4ca-bbb3a533065c-operator-scripts\") pod \"nova-api-e5eb-account-create-update-bxzlj\" (UID: \"98c39b11-db3a-4cff-b4ca-bbb3a533065c\") " pod="openstack/nova-api-e5eb-account-create-update-bxzlj"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.918891 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1941-account-create-update-wdgcl"]
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.920612 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1941-account-create-update-wdgcl"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.923155 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.934279 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dgmfr"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.935731 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1941-account-create-update-wdgcl"]
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.952228 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.952277 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.956748 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62xnr\" (UniqueName: \"kubernetes.io/projected/98c39b11-db3a-4cff-b4ca-bbb3a533065c-kube-api-access-62xnr\") pod \"nova-api-e5eb-account-create-update-bxzlj\" (UID: \"98c39b11-db3a-4cff-b4ca-bbb3a533065c\") " pod="openstack/nova-api-e5eb-account-create-update-bxzlj"
Feb 18 19:58:28 crc kubenswrapper[5007]: I0218 19:58:28.995677 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.025205 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e5eb-account-create-update-bxzlj"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.028541 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/910c61d8-825c-4f88-923b-69e3e0d20388-operator-scripts\") pod \"nova-cell1-db-create-68h4k\" (UID: \"910c61d8-825c-4f88-923b-69e3e0d20388\") " pod="openstack/nova-cell1-db-create-68h4k"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.028577 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5sjj\" (UniqueName: \"kubernetes.io/projected/910c61d8-825c-4f88-923b-69e3e0d20388-kube-api-access-t5sjj\") pod \"nova-cell1-db-create-68h4k\" (UID: \"910c61d8-825c-4f88-923b-69e3e0d20388\") " pod="openstack/nova-cell1-db-create-68h4k"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.028930 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14da8257-18f4-47ad-bf34-64c287afcd9c-operator-scripts\") pod \"nova-cell0-1941-account-create-update-wdgcl\" (UID: \"14da8257-18f4-47ad-bf34-64c287afcd9c\") " pod="openstack/nova-cell0-1941-account-create-update-wdgcl"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.028991 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xr89\" (UniqueName: \"kubernetes.io/projected/14da8257-18f4-47ad-bf34-64c287afcd9c-kube-api-access-6xr89\") pod \"nova-cell0-1941-account-create-update-wdgcl\" (UID: \"14da8257-18f4-47ad-bf34-64c287afcd9c\") " pod="openstack/nova-cell0-1941-account-create-update-wdgcl"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.032495 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/910c61d8-825c-4f88-923b-69e3e0d20388-operator-scripts\") pod \"nova-cell1-db-create-68h4k\" (UID: \"910c61d8-825c-4f88-923b-69e3e0d20388\") " pod="openstack/nova-cell1-db-create-68h4k"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.081127 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5sjj\" (UniqueName: \"kubernetes.io/projected/910c61d8-825c-4f88-923b-69e3e0d20388-kube-api-access-t5sjj\") pod \"nova-cell1-db-create-68h4k\" (UID: \"910c61d8-825c-4f88-923b-69e3e0d20388\") " pod="openstack/nova-cell1-db-create-68h4k"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.131511 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14da8257-18f4-47ad-bf34-64c287afcd9c-operator-scripts\") pod \"nova-cell0-1941-account-create-update-wdgcl\" (UID: \"14da8257-18f4-47ad-bf34-64c287afcd9c\") " pod="openstack/nova-cell0-1941-account-create-update-wdgcl"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.131579 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xr89\" (UniqueName: \"kubernetes.io/projected/14da8257-18f4-47ad-bf34-64c287afcd9c-kube-api-access-6xr89\") pod \"nova-cell0-1941-account-create-update-wdgcl\" (UID: \"14da8257-18f4-47ad-bf34-64c287afcd9c\") " pod="openstack/nova-cell0-1941-account-create-update-wdgcl"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.132555 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14da8257-18f4-47ad-bf34-64c287afcd9c-operator-scripts\") pod \"nova-cell0-1941-account-create-update-wdgcl\" (UID: \"14da8257-18f4-47ad-bf34-64c287afcd9c\") " pod="openstack/nova-cell0-1941-account-create-update-wdgcl"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.140954 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-68h4k"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.145636 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.149912 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c253-account-create-update-mg594"]
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.151191 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c253-account-create-update-mg594"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.162547 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.181555 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xr89\" (UniqueName: \"kubernetes.io/projected/14da8257-18f4-47ad-bf34-64c287afcd9c-kube-api-access-6xr89\") pod \"nova-cell0-1941-account-create-update-wdgcl\" (UID: \"14da8257-18f4-47ad-bf34-64c287afcd9c\") " pod="openstack/nova-cell0-1941-account-create-update-wdgcl"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.213574 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c253-account-create-update-mg594"]
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.235751 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb89ad8-6102-4b15-beea-f81f5cd0640c-operator-scripts\") pod \"nova-cell1-c253-account-create-update-mg594\" (UID: \"bcb89ad8-6102-4b15-beea-f81f5cd0640c\") " pod="openstack/nova-cell1-c253-account-create-update-mg594"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.235848 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64qg4\" (UniqueName: \"kubernetes.io/projected/bcb89ad8-6102-4b15-beea-f81f5cd0640c-kube-api-access-64qg4\") pod \"nova-cell1-c253-account-create-update-mg594\" (UID: \"bcb89ad8-6102-4b15-beea-f81f5cd0640c\") " pod="openstack/nova-cell1-c253-account-create-update-mg594"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.340155 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb89ad8-6102-4b15-beea-f81f5cd0640c-operator-scripts\") pod \"nova-cell1-c253-account-create-update-mg594\" (UID: \"bcb89ad8-6102-4b15-beea-f81f5cd0640c\") " pod="openstack/nova-cell1-c253-account-create-update-mg594"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.341134 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb89ad8-6102-4b15-beea-f81f5cd0640c-operator-scripts\") pod \"nova-cell1-c253-account-create-update-mg594\" (UID: \"bcb89ad8-6102-4b15-beea-f81f5cd0640c\") " pod="openstack/nova-cell1-c253-account-create-update-mg594"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.341189 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64qg4\" (UniqueName: \"kubernetes.io/projected/bcb89ad8-6102-4b15-beea-f81f5cd0640c-kube-api-access-64qg4\") pod \"nova-cell1-c253-account-create-update-mg594\" (UID: \"bcb89ad8-6102-4b15-beea-f81f5cd0640c\") " pod="openstack/nova-cell1-c253-account-create-update-mg594"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.369785 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64qg4\" (UniqueName: \"kubernetes.io/projected/bcb89ad8-6102-4b15-beea-f81f5cd0640c-kube-api-access-64qg4\") pod \"nova-cell1-c253-account-create-update-mg594\" (UID: \"bcb89ad8-6102-4b15-beea-f81f5cd0640c\") " pod="openstack/nova-cell1-c253-account-create-update-mg594"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.392519 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1941-account-create-update-wdgcl"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.540736 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c253-account-create-update-mg594"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.616675 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6csdv"]
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.693865 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6csdv" event={"ID":"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b","Type":"ContainerStarted","Data":"0011534338db7063a948d8dd59c5ee0329d9e303271ee32feea8be8dc02b6447"}
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.694238 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.694298 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.789734 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dgmfr"]
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.803093 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e5eb-account-create-update-bxzlj"]
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.926940 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-68h4k"]
Feb 18 19:58:29 crc kubenswrapper[5007]: I0218 19:58:29.964571 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1941-account-create-update-wdgcl"]
Feb 18 19:58:29 crc kubenswrapper[5007]: W0218 19:58:29.973001 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14da8257_18f4_47ad_bf34_64c287afcd9c.slice/crio-a7f76cb70a588d76f253393c2fca69bf52f71516b5043e75b1850b0a69573071 WatchSource:0}: Error finding container a7f76cb70a588d76f253393c2fca69bf52f71516b5043e75b1850b0a69573071: Status 404 returned error can't find the container with id a7f76cb70a588d76f253393c2fca69bf52f71516b5043e75b1850b0a69573071
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.072146 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c253-account-create-update-mg594"]
Feb 18 19:58:30 crc kubenswrapper[5007]: W0218 19:58:30.075107 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcb89ad8_6102_4b15_beea_f81f5cd0640c.slice/crio-e141ade39501ad3aadf0e3c979a5e8f69f6d038d370d2f20d3052b45c4a1b4fa WatchSource:0}: Error finding container e141ade39501ad3aadf0e3c979a5e8f69f6d038d370d2f20d3052b45c4a1b4fa: Status 404 returned error can't find the container with id e141ade39501ad3aadf0e3c979a5e8f69f6d038d370d2f20d3052b45c4a1b4fa
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.705144 5007 generic.go:334] "Generic (PLEG): container finished" podID="6d3c479f-9b46-4418-b5c3-60b63b6aaa39" containerID="83be7aa7dae8537061545300a691e3029a81cf3cdad2a6e498a64d03a02e6a1f" exitCode=0
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.705236 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dgmfr" event={"ID":"6d3c479f-9b46-4418-b5c3-60b63b6aaa39","Type":"ContainerDied","Data":"83be7aa7dae8537061545300a691e3029a81cf3cdad2a6e498a64d03a02e6a1f"}
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.705545 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dgmfr" event={"ID":"6d3c479f-9b46-4418-b5c3-60b63b6aaa39","Type":"ContainerStarted","Data":"0e362fc0f2bfdd73c293ccdb0fcee486bf7d9f2be3b04ca2688c30d0e1e5368d"}
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.707848 5007 generic.go:334] "Generic (PLEG): container finished" podID="4f8f4417-ce9f-4152-bd07-8f2b3756bd9b" containerID="1b63fb3cd464e1937b6ac676cbfe0d39717ad20583f308875ee044c9adfcdfc7" exitCode=0
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.707922 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6csdv" event={"ID":"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b","Type":"ContainerDied","Data":"1b63fb3cd464e1937b6ac676cbfe0d39717ad20583f308875ee044c9adfcdfc7"}
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.709663 5007 generic.go:334] "Generic (PLEG): container finished" podID="14da8257-18f4-47ad-bf34-64c287afcd9c" containerID="9f60f38d80d84af9b61579d55fbd7f63d1eaa74e05167c6ac3e165fcc9f0afcb" exitCode=0
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.709732 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1941-account-create-update-wdgcl" event={"ID":"14da8257-18f4-47ad-bf34-64c287afcd9c","Type":"ContainerDied","Data":"9f60f38d80d84af9b61579d55fbd7f63d1eaa74e05167c6ac3e165fcc9f0afcb"}
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.709757 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1941-account-create-update-wdgcl" event={"ID":"14da8257-18f4-47ad-bf34-64c287afcd9c","Type":"ContainerStarted","Data":"a7f76cb70a588d76f253393c2fca69bf52f71516b5043e75b1850b0a69573071"}
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.711381 5007 generic.go:334] "Generic (PLEG): container finished" podID="910c61d8-825c-4f88-923b-69e3e0d20388" containerID="d686fb592e2993bd0ffaae443dec0b7bdfb0388125fdbf6920649836d3d7124d" exitCode=0
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.711459 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-68h4k" event={"ID":"910c61d8-825c-4f88-923b-69e3e0d20388","Type":"ContainerDied","Data":"d686fb592e2993bd0ffaae443dec0b7bdfb0388125fdbf6920649836d3d7124d"}
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.711489 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-68h4k" event={"ID":"910c61d8-825c-4f88-923b-69e3e0d20388","Type":"ContainerStarted","Data":"3502fc6fcc77d22b89117ce1f90c776adb7c88fcd8a0749801ab63fcbc8c3b96"}
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.713143 5007 generic.go:334] "Generic (PLEG): container finished" podID="98c39b11-db3a-4cff-b4ca-bbb3a533065c" containerID="351e16fb070cdbaf4f74ed7bb989c0ca1a4950807987f33641c54d4a2484e666" exitCode=0
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.713215 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e5eb-account-create-update-bxzlj" event={"ID":"98c39b11-db3a-4cff-b4ca-bbb3a533065c","Type":"ContainerDied","Data":"351e16fb070cdbaf4f74ed7bb989c0ca1a4950807987f33641c54d4a2484e666"}
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.713238 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e5eb-account-create-update-bxzlj" event={"ID":"98c39b11-db3a-4cff-b4ca-bbb3a533065c","Type":"ContainerStarted","Data":"fbbeec67edc93db810fa95d6d944b66401261fe11c0ce24c25f904021c34bbbf"}
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.715177 5007 generic.go:334] "Generic (PLEG): container finished" podID="bcb89ad8-6102-4b15-beea-f81f5cd0640c" containerID="291cb0d0d63a264382a202d81b86b0652c420324ead30a0155efabd4dd879465" exitCode=0
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.715248 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c253-account-create-update-mg594" event={"ID":"bcb89ad8-6102-4b15-beea-f81f5cd0640c","Type":"ContainerDied","Data":"291cb0d0d63a264382a202d81b86b0652c420324ead30a0155efabd4dd879465"}
Feb 18 19:58:30 crc kubenswrapper[5007]: I0218 19:58:30.715280 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c253-account-create-update-mg594" event={"ID":"bcb89ad8-6102-4b15-beea-f81f5cd0640c","Type":"ContainerStarted","Data":"e141ade39501ad3aadf0e3c979a5e8f69f6d038d370d2f20d3052b45c4a1b4fa"}
Feb 18 19:58:31 crc kubenswrapper[5007]: I0218 19:58:31.667905 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 18 19:58:31 crc kubenswrapper[5007]: I0218 19:58:31.678941 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.216832 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-68h4k"
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.315238 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5sjj\" (UniqueName: \"kubernetes.io/projected/910c61d8-825c-4f88-923b-69e3e0d20388-kube-api-access-t5sjj\") pod \"910c61d8-825c-4f88-923b-69e3e0d20388\" (UID: \"910c61d8-825c-4f88-923b-69e3e0d20388\") "
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.315692 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/910c61d8-825c-4f88-923b-69e3e0d20388-operator-scripts\") pod \"910c61d8-825c-4f88-923b-69e3e0d20388\" (UID: \"910c61d8-825c-4f88-923b-69e3e0d20388\") "
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.316387 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/910c61d8-825c-4f88-923b-69e3e0d20388-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "910c61d8-825c-4f88-923b-69e3e0d20388" (UID: "910c61d8-825c-4f88-923b-69e3e0d20388"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.320732 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/910c61d8-825c-4f88-923b-69e3e0d20388-kube-api-access-t5sjj" (OuterVolumeSpecName: "kube-api-access-t5sjj") pod "910c61d8-825c-4f88-923b-69e3e0d20388" (UID: "910c61d8-825c-4f88-923b-69e3e0d20388"). InnerVolumeSpecName "kube-api-access-t5sjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.417658 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/910c61d8-825c-4f88-923b-69e3e0d20388-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.417692 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5sjj\" (UniqueName: \"kubernetes.io/projected/910c61d8-825c-4f88-923b-69e3e0d20388-kube-api-access-t5sjj\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.429368 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e5eb-account-create-update-bxzlj"
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.437305 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dgmfr"
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.442330 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c253-account-create-update-mg594"
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.446659 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1941-account-create-update-wdgcl"
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.458689 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6csdv"
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.519182 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62xnr\" (UniqueName: \"kubernetes.io/projected/98c39b11-db3a-4cff-b4ca-bbb3a533065c-kube-api-access-62xnr\") pod \"98c39b11-db3a-4cff-b4ca-bbb3a533065c\" (UID: \"98c39b11-db3a-4cff-b4ca-bbb3a533065c\") "
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.519379 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c39b11-db3a-4cff-b4ca-bbb3a533065c-operator-scripts\") pod \"98c39b11-db3a-4cff-b4ca-bbb3a533065c\" (UID: \"98c39b11-db3a-4cff-b4ca-bbb3a533065c\") "
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.520533 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c39b11-db3a-4cff-b4ca-bbb3a533065c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98c39b11-db3a-4cff-b4ca-bbb3a533065c" (UID: "98c39b11-db3a-4cff-b4ca-bbb3a533065c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.542705 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c39b11-db3a-4cff-b4ca-bbb3a533065c-kube-api-access-62xnr" (OuterVolumeSpecName: "kube-api-access-62xnr") pod "98c39b11-db3a-4cff-b4ca-bbb3a533065c" (UID: "98c39b11-db3a-4cff-b4ca-bbb3a533065c"). InnerVolumeSpecName "kube-api-access-62xnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.622701 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.623410 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb89ad8-6102-4b15-beea-f81f5cd0640c-operator-scripts\") pod \"bcb89ad8-6102-4b15-beea-f81f5cd0640c\" (UID: \"bcb89ad8-6102-4b15-beea-f81f5cd0640c\") "
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.623462 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14da8257-18f4-47ad-bf34-64c287afcd9c-operator-scripts\") pod \"14da8257-18f4-47ad-bf34-64c287afcd9c\" (UID: \"14da8257-18f4-47ad-bf34-64c287afcd9c\") "
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.623497 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vkx8\" (UniqueName: \"kubernetes.io/projected/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b-kube-api-access-6vkx8\") pod \"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b\" (UID: \"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b\") "
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.623555 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6cv8\" (UniqueName: \"kubernetes.io/projected/6d3c479f-9b46-4418-b5c3-60b63b6aaa39-kube-api-access-n6cv8\") pod \"6d3c479f-9b46-4418-b5c3-60b63b6aaa39\" (UID: \"6d3c479f-9b46-4418-b5c3-60b63b6aaa39\") "
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.623597 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b-operator-scripts\") pod \"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b\" (UID: \"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b\") "
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.623739 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3c479f-9b46-4418-b5c3-60b63b6aaa39-operator-scripts\") pod \"6d3c479f-9b46-4418-b5c3-60b63b6aaa39\" (UID: \"6d3c479f-9b46-4418-b5c3-60b63b6aaa39\") "
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.623769 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64qg4\" (UniqueName: \"kubernetes.io/projected/bcb89ad8-6102-4b15-beea-f81f5cd0640c-kube-api-access-64qg4\") pod \"bcb89ad8-6102-4b15-beea-f81f5cd0640c\" (UID: \"bcb89ad8-6102-4b15-beea-f81f5cd0640c\") "
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.623813 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xr89\" (UniqueName: \"kubernetes.io/projected/14da8257-18f4-47ad-bf34-64c287afcd9c-kube-api-access-6xr89\") pod \"14da8257-18f4-47ad-bf34-64c287afcd9c\" (UID: \"14da8257-18f4-47ad-bf34-64c287afcd9c\") "
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.624182 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98c39b11-db3a-4cff-b4ca-bbb3a533065c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.624198 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62xnr\" (UniqueName: \"kubernetes.io/projected/98c39b11-db3a-4cff-b4ca-bbb3a533065c-kube-api-access-62xnr\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.626636 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3c479f-9b46-4418-b5c3-60b63b6aaa39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d3c479f-9b46-4418-b5c3-60b63b6aaa39" (UID: "6d3c479f-9b46-4418-b5c3-60b63b6aaa39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.626897 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f8f4417-ce9f-4152-bd07-8f2b3756bd9b" (UID: "4f8f4417-ce9f-4152-bd07-8f2b3756bd9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.631153 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14da8257-18f4-47ad-bf34-64c287afcd9c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14da8257-18f4-47ad-bf34-64c287afcd9c" (UID: "14da8257-18f4-47ad-bf34-64c287afcd9c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.631537 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb89ad8-6102-4b15-beea-f81f5cd0640c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcb89ad8-6102-4b15-beea-f81f5cd0640c" (UID: "bcb89ad8-6102-4b15-beea-f81f5cd0640c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.635379 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14da8257-18f4-47ad-bf34-64c287afcd9c-kube-api-access-6xr89" (OuterVolumeSpecName: "kube-api-access-6xr89") pod "14da8257-18f4-47ad-bf34-64c287afcd9c" (UID: "14da8257-18f4-47ad-bf34-64c287afcd9c"). InnerVolumeSpecName "kube-api-access-6xr89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.635656 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3c479f-9b46-4418-b5c3-60b63b6aaa39-kube-api-access-n6cv8" (OuterVolumeSpecName: "kube-api-access-n6cv8") pod "6d3c479f-9b46-4418-b5c3-60b63b6aaa39" (UID: "6d3c479f-9b46-4418-b5c3-60b63b6aaa39"). InnerVolumeSpecName "kube-api-access-n6cv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.637645 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb89ad8-6102-4b15-beea-f81f5cd0640c-kube-api-access-64qg4" (OuterVolumeSpecName: "kube-api-access-64qg4") pod "bcb89ad8-6102-4b15-beea-f81f5cd0640c" (UID: "bcb89ad8-6102-4b15-beea-f81f5cd0640c"). InnerVolumeSpecName "kube-api-access-64qg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.640808 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b-kube-api-access-6vkx8" (OuterVolumeSpecName: "kube-api-access-6vkx8") pod "4f8f4417-ce9f-4152-bd07-8f2b3756bd9b" (UID: "4f8f4417-ce9f-4152-bd07-8f2b3756bd9b"). InnerVolumeSpecName "kube-api-access-6vkx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.728550 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3c479f-9b46-4418-b5c3-60b63b6aaa39-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.728614 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64qg4\" (UniqueName: \"kubernetes.io/projected/bcb89ad8-6102-4b15-beea-f81f5cd0640c-kube-api-access-64qg4\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.728629 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xr89\" (UniqueName: \"kubernetes.io/projected/14da8257-18f4-47ad-bf34-64c287afcd9c-kube-api-access-6xr89\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.728643 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb89ad8-6102-4b15-beea-f81f5cd0640c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.728654 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14da8257-18f4-47ad-bf34-64c287afcd9c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.728665 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vkx8\" (UniqueName: \"kubernetes.io/projected/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b-kube-api-access-6vkx8\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.728675 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6cv8\" (UniqueName: \"kubernetes.io/projected/6d3c479f-9b46-4418-b5c3-60b63b6aaa39-kube-api-access-n6cv8\") on node \"crc\" DevicePath \"\""
Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.728685 5007 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.734874 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6csdv" event={"ID":"4f8f4417-ce9f-4152-bd07-8f2b3756bd9b","Type":"ContainerDied","Data":"0011534338db7063a948d8dd59c5ee0329d9e303271ee32feea8be8dc02b6447"} Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.734919 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0011534338db7063a948d8dd59c5ee0329d9e303271ee32feea8be8dc02b6447" Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.734977 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6csdv" Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.737759 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dgmfr" event={"ID":"6d3c479f-9b46-4418-b5c3-60b63b6aaa39","Type":"ContainerDied","Data":"0e362fc0f2bfdd73c293ccdb0fcee486bf7d9f2be3b04ca2688c30d0e1e5368d"} Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.737807 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e362fc0f2bfdd73c293ccdb0fcee486bf7d9f2be3b04ca2688c30d0e1e5368d" Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.737876 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dgmfr" Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.746688 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1941-account-create-update-wdgcl" Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.746859 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1941-account-create-update-wdgcl" event={"ID":"14da8257-18f4-47ad-bf34-64c287afcd9c","Type":"ContainerDied","Data":"a7f76cb70a588d76f253393c2fca69bf52f71516b5043e75b1850b0a69573071"} Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.746922 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7f76cb70a588d76f253393c2fca69bf52f71516b5043e75b1850b0a69573071" Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.757355 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-68h4k" Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.757363 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-68h4k" event={"ID":"910c61d8-825c-4f88-923b-69e3e0d20388","Type":"ContainerDied","Data":"3502fc6fcc77d22b89117ce1f90c776adb7c88fcd8a0749801ab63fcbc8c3b96"} Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.757404 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3502fc6fcc77d22b89117ce1f90c776adb7c88fcd8a0749801ab63fcbc8c3b96" Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.763144 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e5eb-account-create-update-bxzlj" event={"ID":"98c39b11-db3a-4cff-b4ca-bbb3a533065c","Type":"ContainerDied","Data":"fbbeec67edc93db810fa95d6d944b66401261fe11c0ce24c25f904021c34bbbf"} Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.763180 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbbeec67edc93db810fa95d6d944b66401261fe11c0ce24c25f904021c34bbbf" Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.763238 5007 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-e5eb-account-create-update-bxzlj" Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.798276 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c253-account-create-update-mg594" Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.799606 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c253-account-create-update-mg594" event={"ID":"bcb89ad8-6102-4b15-beea-f81f5cd0640c","Type":"ContainerDied","Data":"e141ade39501ad3aadf0e3c979a5e8f69f6d038d370d2f20d3052b45c4a1b4fa"} Feb 18 19:58:32 crc kubenswrapper[5007]: I0218 19:58:32.799696 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e141ade39501ad3aadf0e3c979a5e8f69f6d038d370d2f20d3052b45c4a1b4fa" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.235746 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7kccx"] Feb 18 19:58:34 crc kubenswrapper[5007]: E0218 19:58:34.236164 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb89ad8-6102-4b15-beea-f81f5cd0640c" containerName="mariadb-account-create-update" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.236182 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb89ad8-6102-4b15-beea-f81f5cd0640c" containerName="mariadb-account-create-update" Feb 18 19:58:34 crc kubenswrapper[5007]: E0218 19:58:34.236201 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14da8257-18f4-47ad-bf34-64c287afcd9c" containerName="mariadb-account-create-update" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.236209 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="14da8257-18f4-47ad-bf34-64c287afcd9c" containerName="mariadb-account-create-update" Feb 18 19:58:34 crc kubenswrapper[5007]: E0218 19:58:34.236222 5007 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6d3c479f-9b46-4418-b5c3-60b63b6aaa39" containerName="mariadb-database-create" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.236232 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3c479f-9b46-4418-b5c3-60b63b6aaa39" containerName="mariadb-database-create" Feb 18 19:58:34 crc kubenswrapper[5007]: E0218 19:58:34.236273 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c39b11-db3a-4cff-b4ca-bbb3a533065c" containerName="mariadb-account-create-update" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.236281 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c39b11-db3a-4cff-b4ca-bbb3a533065c" containerName="mariadb-account-create-update" Feb 18 19:58:34 crc kubenswrapper[5007]: E0218 19:58:34.236296 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="910c61d8-825c-4f88-923b-69e3e0d20388" containerName="mariadb-database-create" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.236304 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="910c61d8-825c-4f88-923b-69e3e0d20388" containerName="mariadb-database-create" Feb 18 19:58:34 crc kubenswrapper[5007]: E0218 19:58:34.236313 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f8f4417-ce9f-4152-bd07-8f2b3756bd9b" containerName="mariadb-database-create" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.236320 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f8f4417-ce9f-4152-bd07-8f2b3756bd9b" containerName="mariadb-database-create" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.236554 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="910c61d8-825c-4f88-923b-69e3e0d20388" containerName="mariadb-database-create" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.236574 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3c479f-9b46-4418-b5c3-60b63b6aaa39" containerName="mariadb-database-create" Feb 18 19:58:34 crc kubenswrapper[5007]: 
I0218 19:58:34.236597 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c39b11-db3a-4cff-b4ca-bbb3a533065c" containerName="mariadb-account-create-update" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.236614 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f8f4417-ce9f-4152-bd07-8f2b3756bd9b" containerName="mariadb-database-create" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.236631 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb89ad8-6102-4b15-beea-f81f5cd0640c" containerName="mariadb-account-create-update" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.236642 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="14da8257-18f4-47ad-bf34-64c287afcd9c" containerName="mariadb-account-create-update" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.237403 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.287183 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.287498 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.287919 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-79l24" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.288573 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-scripts\") pod \"nova-cell0-conductor-db-sync-7kccx\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.288686 
5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqvn\" (UniqueName: \"kubernetes.io/projected/6839f27f-6779-4e91-80f8-669c7f4583e2-kube-api-access-zkqvn\") pod \"nova-cell0-conductor-db-sync-7kccx\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.288883 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7kccx\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.288997 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-config-data\") pod \"nova-cell0-conductor-db-sync-7kccx\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.293721 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7kccx"] Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.391232 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkqvn\" (UniqueName: \"kubernetes.io/projected/6839f27f-6779-4e91-80f8-669c7f4583e2-kube-api-access-zkqvn\") pod \"nova-cell0-conductor-db-sync-7kccx\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.391595 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7kccx\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.391638 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-config-data\") pod \"nova-cell0-conductor-db-sync-7kccx\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.391698 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-scripts\") pod \"nova-cell0-conductor-db-sync-7kccx\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.397468 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7kccx\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.399832 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-config-data\") pod \"nova-cell0-conductor-db-sync-7kccx\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.401463 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-scripts\") pod \"nova-cell0-conductor-db-sync-7kccx\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.416911 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkqvn\" (UniqueName: \"kubernetes.io/projected/6839f27f-6779-4e91-80f8-669c7f4583e2-kube-api-access-zkqvn\") pod \"nova-cell0-conductor-db-sync-7kccx\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:34 crc kubenswrapper[5007]: I0218 19:58:34.602970 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:58:35 crc kubenswrapper[5007]: W0218 19:58:35.090876 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6839f27f_6779_4e91_80f8_669c7f4583e2.slice/crio-54991b7c1a3a0332dfab253e1601c81c58343c2ed490a71771c5be08eac1a16f WatchSource:0}: Error finding container 54991b7c1a3a0332dfab253e1601c81c58343c2ed490a71771c5be08eac1a16f: Status 404 returned error can't find the container with id 54991b7c1a3a0332dfab253e1601c81c58343c2ed490a71771c5be08eac1a16f Feb 18 19:58:35 crc kubenswrapper[5007]: I0218 19:58:35.092864 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7kccx"] Feb 18 19:58:35 crc kubenswrapper[5007]: I0218 19:58:35.843733 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7kccx" event={"ID":"6839f27f-6779-4e91-80f8-669c7f4583e2","Type":"ContainerStarted","Data":"54991b7c1a3a0332dfab253e1601c81c58343c2ed490a71771c5be08eac1a16f"} Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.730825 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.766888 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed11bf22-9b65-49f2-9e06-ec749324d752-run-httpd\") pod \"ed11bf22-9b65-49f2-9e06-ec749324d752\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.766978 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxk9m\" (UniqueName: \"kubernetes.io/projected/ed11bf22-9b65-49f2-9e06-ec749324d752-kube-api-access-cxk9m\") pod \"ed11bf22-9b65-49f2-9e06-ec749324d752\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.767053 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-sg-core-conf-yaml\") pod \"ed11bf22-9b65-49f2-9e06-ec749324d752\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.767104 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed11bf22-9b65-49f2-9e06-ec749324d752-log-httpd\") pod \"ed11bf22-9b65-49f2-9e06-ec749324d752\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.767145 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-config-data\") pod \"ed11bf22-9b65-49f2-9e06-ec749324d752\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.767165 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-scripts\") pod \"ed11bf22-9b65-49f2-9e06-ec749324d752\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.767220 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-combined-ca-bundle\") pod \"ed11bf22-9b65-49f2-9e06-ec749324d752\" (UID: \"ed11bf22-9b65-49f2-9e06-ec749324d752\") " Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.768373 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed11bf22-9b65-49f2-9e06-ec749324d752-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ed11bf22-9b65-49f2-9e06-ec749324d752" (UID: "ed11bf22-9b65-49f2-9e06-ec749324d752"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.770150 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed11bf22-9b65-49f2-9e06-ec749324d752-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ed11bf22-9b65-49f2-9e06-ec749324d752" (UID: "ed11bf22-9b65-49f2-9e06-ec749324d752"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.806045 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed11bf22-9b65-49f2-9e06-ec749324d752-kube-api-access-cxk9m" (OuterVolumeSpecName: "kube-api-access-cxk9m") pod "ed11bf22-9b65-49f2-9e06-ec749324d752" (UID: "ed11bf22-9b65-49f2-9e06-ec749324d752"). InnerVolumeSpecName "kube-api-access-cxk9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.807875 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-scripts" (OuterVolumeSpecName: "scripts") pod "ed11bf22-9b65-49f2-9e06-ec749324d752" (UID: "ed11bf22-9b65-49f2-9e06-ec749324d752"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.818595 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ed11bf22-9b65-49f2-9e06-ec749324d752" (UID: "ed11bf22-9b65-49f2-9e06-ec749324d752"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.862022 5007 generic.go:334] "Generic (PLEG): container finished" podID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerID="871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890" exitCode=137 Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.862072 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed11bf22-9b65-49f2-9e06-ec749324d752","Type":"ContainerDied","Data":"871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890"} Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.862108 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed11bf22-9b65-49f2-9e06-ec749324d752","Type":"ContainerDied","Data":"2bd28b7151937cd8e9f17e284048890757e44b7964454e81437484799389c8f8"} Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.862131 5007 scope.go:117] "RemoveContainer" containerID="871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 
19:58:37.862293 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.870102 5007 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.870130 5007 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed11bf22-9b65-49f2-9e06-ec749324d752-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.870158 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.870170 5007 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed11bf22-9b65-49f2-9e06-ec749324d752-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.870184 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxk9m\" (UniqueName: \"kubernetes.io/projected/ed11bf22-9b65-49f2-9e06-ec749324d752-kube-api-access-cxk9m\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.875325 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed11bf22-9b65-49f2-9e06-ec749324d752" (UID: "ed11bf22-9b65-49f2-9e06-ec749324d752"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.886971 5007 scope.go:117] "RemoveContainer" containerID="50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.889446 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-config-data" (OuterVolumeSpecName: "config-data") pod "ed11bf22-9b65-49f2-9e06-ec749324d752" (UID: "ed11bf22-9b65-49f2-9e06-ec749324d752"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.916694 5007 scope.go:117] "RemoveContainer" containerID="70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.940379 5007 scope.go:117] "RemoveContainer" containerID="673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.958693 5007 scope.go:117] "RemoveContainer" containerID="871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890" Feb 18 19:58:37 crc kubenswrapper[5007]: E0218 19:58:37.959091 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890\": container with ID starting with 871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890 not found: ID does not exist" containerID="871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.959123 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890"} err="failed to get container status \"871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890\": 
rpc error: code = NotFound desc = could not find container \"871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890\": container with ID starting with 871bebdc3bbc75d01e42f0e7d98417016b01f9de74f568d235cdf68dd1546890 not found: ID does not exist" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.959145 5007 scope.go:117] "RemoveContainer" containerID="50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63" Feb 18 19:58:37 crc kubenswrapper[5007]: E0218 19:58:37.959394 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63\": container with ID starting with 50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63 not found: ID does not exist" containerID="50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.959418 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63"} err="failed to get container status \"50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63\": rpc error: code = NotFound desc = could not find container \"50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63\": container with ID starting with 50b2825c26db80ac901a86cc119b8da919186a66b95278b975620a290868ec63 not found: ID does not exist" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.959445 5007 scope.go:117] "RemoveContainer" containerID="70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43" Feb 18 19:58:37 crc kubenswrapper[5007]: E0218 19:58:37.959657 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43\": container with ID starting with 
70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43 not found: ID does not exist" containerID="70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.959677 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43"} err="failed to get container status \"70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43\": rpc error: code = NotFound desc = could not find container \"70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43\": container with ID starting with 70f1ce4cb1b79966b9647e86b58963293d65b970b78d82a85fd7b0a17b3eab43 not found: ID does not exist" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.959688 5007 scope.go:117] "RemoveContainer" containerID="673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967" Feb 18 19:58:37 crc kubenswrapper[5007]: E0218 19:58:37.959866 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967\": container with ID starting with 673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967 not found: ID does not exist" containerID="673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.959890 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967"} err="failed to get container status \"673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967\": rpc error: code = NotFound desc = could not find container \"673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967\": container with ID starting with 673be6382c48ac37ae95d28f7d0a7fc07c1cc1eb0353f62e5ac9dd6bc5437967 not found: ID does not 
exist" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.972574 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:37 crc kubenswrapper[5007]: I0218 19:58:37.972610 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed11bf22-9b65-49f2-9e06-ec749324d752-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.203197 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.218307 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.252376 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:58:38 crc kubenswrapper[5007]: E0218 19:58:38.252799 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="ceilometer-notification-agent" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.252816 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="ceilometer-notification-agent" Feb 18 19:58:38 crc kubenswrapper[5007]: E0218 19:58:38.252835 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="proxy-httpd" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.252841 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="proxy-httpd" Feb 18 19:58:38 crc kubenswrapper[5007]: E0218 19:58:38.252867 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="sg-core" Feb 18 
19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.252874 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="sg-core" Feb 18 19:58:38 crc kubenswrapper[5007]: E0218 19:58:38.252882 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="ceilometer-central-agent" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.252888 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="ceilometer-central-agent" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.253166 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="sg-core" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.253186 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="ceilometer-notification-agent" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.253197 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="proxy-httpd" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.253207 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" containerName="ceilometer-central-agent" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.254960 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.258557 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.259149 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.261065 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.278809 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-scripts\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.278856 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.278900 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9152855-158a-4ba6-b099-e65761296526-run-httpd\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.278932 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-config-data\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " 
pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.278962 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9152855-158a-4ba6-b099-e65761296526-log-httpd\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.278987 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcmbm\" (UniqueName: \"kubernetes.io/projected/b9152855-158a-4ba6-b099-e65761296526-kube-api-access-dcmbm\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.279017 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.383153 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.383223 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9152855-158a-4ba6-b099-e65761296526-run-httpd\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.383255 5007 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-config-data\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.383284 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9152855-158a-4ba6-b099-e65761296526-log-httpd\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.383326 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcmbm\" (UniqueName: \"kubernetes.io/projected/b9152855-158a-4ba6-b099-e65761296526-kube-api-access-dcmbm\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.383362 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.383487 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-scripts\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.385145 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9152855-158a-4ba6-b099-e65761296526-log-httpd\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 
crc kubenswrapper[5007]: I0218 19:58:38.385370 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9152855-158a-4ba6-b099-e65761296526-run-httpd\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.392042 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.410685 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.413577 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-config-data\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.419143 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-scripts\") pod \"ceilometer-0\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.426778 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcmbm\" (UniqueName: \"kubernetes.io/projected/b9152855-158a-4ba6-b099-e65761296526-kube-api-access-dcmbm\") pod \"ceilometer-0\" (UID: 
\"b9152855-158a-4ba6-b099-e65761296526\") " pod="openstack/ceilometer-0" Feb 18 19:58:38 crc kubenswrapper[5007]: I0218 19:58:38.576913 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:58:39 crc kubenswrapper[5007]: I0218 19:58:39.920392 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed11bf22-9b65-49f2-9e06-ec749324d752" path="/var/lib/kubelet/pods/ed11bf22-9b65-49f2-9e06-ec749324d752/volumes" Feb 18 19:58:44 crc kubenswrapper[5007]: I0218 19:58:44.594166 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:58:44 crc kubenswrapper[5007]: W0218 19:58:44.594669 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9152855_158a_4ba6_b099_e65761296526.slice/crio-77520111db3378f7e2a763a9302b2a7bddd19c2c635dc65e9cdd2d4e036c5bf4 WatchSource:0}: Error finding container 77520111db3378f7e2a763a9302b2a7bddd19c2c635dc65e9cdd2d4e036c5bf4: Status 404 returned error can't find the container with id 77520111db3378f7e2a763a9302b2a7bddd19c2c635dc65e9cdd2d4e036c5bf4 Feb 18 19:58:44 crc kubenswrapper[5007]: I0218 19:58:44.597315 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:58:44 crc kubenswrapper[5007]: I0218 19:58:44.973633 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7kccx" event={"ID":"6839f27f-6779-4e91-80f8-669c7f4583e2","Type":"ContainerStarted","Data":"b851d4cee6f5b4b62cd371a257eeb53ce2753849df0019589fcfd402020c572b"} Feb 18 19:58:44 crc kubenswrapper[5007]: I0218 19:58:44.976187 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9152855-158a-4ba6-b099-e65761296526","Type":"ContainerStarted","Data":"08a584c1a7bc88c2036ba25b7fc4ec20f63d78b0cd4b171c60b6a4efbc0e2287"} Feb 18 19:58:44 crc 
kubenswrapper[5007]: I0218 19:58:44.976238 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9152855-158a-4ba6-b099-e65761296526","Type":"ContainerStarted","Data":"77520111db3378f7e2a763a9302b2a7bddd19c2c635dc65e9cdd2d4e036c5bf4"} Feb 18 19:58:45 crc kubenswrapper[5007]: I0218 19:58:45.000527 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7kccx" podStartSLOduration=1.6913716490000001 podStartE2EDuration="11.000503949s" podCreationTimestamp="2026-02-18 19:58:34 +0000 UTC" firstStartedPulling="2026-02-18 19:58:35.093138584 +0000 UTC m=+1199.875258118" lastFinishedPulling="2026-02-18 19:58:44.402270894 +0000 UTC m=+1209.184390418" observedRunningTime="2026-02-18 19:58:44.990899844 +0000 UTC m=+1209.773019388" watchObservedRunningTime="2026-02-18 19:58:45.000503949 +0000 UTC m=+1209.782623493" Feb 18 19:58:45 crc kubenswrapper[5007]: I0218 19:58:45.989674 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9152855-158a-4ba6-b099-e65761296526","Type":"ContainerStarted","Data":"2908c501588e4898142296d6c0c210cca045514597ae82115594c1094050a236"} Feb 18 19:58:45 crc kubenswrapper[5007]: I0218 19:58:45.989948 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9152855-158a-4ba6-b099-e65761296526","Type":"ContainerStarted","Data":"f6b494360468dcff1baa8032d04a67cc90312617edb79f2b866d2b9ab30d7a1b"} Feb 18 19:58:48 crc kubenswrapper[5007]: I0218 19:58:48.011419 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9152855-158a-4ba6-b099-e65761296526","Type":"ContainerStarted","Data":"b1c8e2525184bb7c8b394b62980fcd89b08602b1ee48f88db7080e4fd6a19103"} Feb 18 19:58:48 crc kubenswrapper[5007]: I0218 19:58:48.011779 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:58:48 crc 
kubenswrapper[5007]: I0218 19:58:48.049805 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.243758356 podStartE2EDuration="10.049788348s" podCreationTimestamp="2026-02-18 19:58:38 +0000 UTC" firstStartedPulling="2026-02-18 19:58:44.597124275 +0000 UTC m=+1209.379243799" lastFinishedPulling="2026-02-18 19:58:47.403154257 +0000 UTC m=+1212.185273791" observedRunningTime="2026-02-18 19:58:48.041678996 +0000 UTC m=+1212.823798520" watchObservedRunningTime="2026-02-18 19:58:48.049788348 +0000 UTC m=+1212.831907872" Feb 18 19:58:59 crc kubenswrapper[5007]: I0218 19:58:59.140224 5007 generic.go:334] "Generic (PLEG): container finished" podID="6839f27f-6779-4e91-80f8-669c7f4583e2" containerID="b851d4cee6f5b4b62cd371a257eeb53ce2753849df0019589fcfd402020c572b" exitCode=0 Feb 18 19:58:59 crc kubenswrapper[5007]: I0218 19:58:59.141059 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7kccx" event={"ID":"6839f27f-6779-4e91-80f8-669c7f4583e2","Type":"ContainerDied","Data":"b851d4cee6f5b4b62cd371a257eeb53ce2753849df0019589fcfd402020c572b"} Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.589927 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.697061 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkqvn\" (UniqueName: \"kubernetes.io/projected/6839f27f-6779-4e91-80f8-669c7f4583e2-kube-api-access-zkqvn\") pod \"6839f27f-6779-4e91-80f8-669c7f4583e2\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.697286 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-config-data\") pod \"6839f27f-6779-4e91-80f8-669c7f4583e2\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.697343 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-combined-ca-bundle\") pod \"6839f27f-6779-4e91-80f8-669c7f4583e2\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.697407 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-scripts\") pod \"6839f27f-6779-4e91-80f8-669c7f4583e2\" (UID: \"6839f27f-6779-4e91-80f8-669c7f4583e2\") " Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.706107 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6839f27f-6779-4e91-80f8-669c7f4583e2-kube-api-access-zkqvn" (OuterVolumeSpecName: "kube-api-access-zkqvn") pod "6839f27f-6779-4e91-80f8-669c7f4583e2" (UID: "6839f27f-6779-4e91-80f8-669c7f4583e2"). InnerVolumeSpecName "kube-api-access-zkqvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.706813 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-scripts" (OuterVolumeSpecName: "scripts") pod "6839f27f-6779-4e91-80f8-669c7f4583e2" (UID: "6839f27f-6779-4e91-80f8-669c7f4583e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.736310 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6839f27f-6779-4e91-80f8-669c7f4583e2" (UID: "6839f27f-6779-4e91-80f8-669c7f4583e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.755788 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-config-data" (OuterVolumeSpecName: "config-data") pod "6839f27f-6779-4e91-80f8-669c7f4583e2" (UID: "6839f27f-6779-4e91-80f8-669c7f4583e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.799560 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.799603 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.799623 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6839f27f-6779-4e91-80f8-669c7f4583e2-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:00 crc kubenswrapper[5007]: I0218 19:59:00.799636 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkqvn\" (UniqueName: \"kubernetes.io/projected/6839f27f-6779-4e91-80f8-669c7f4583e2-kube-api-access-zkqvn\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.170803 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7kccx" event={"ID":"6839f27f-6779-4e91-80f8-669c7f4583e2","Type":"ContainerDied","Data":"54991b7c1a3a0332dfab253e1601c81c58343c2ed490a71771c5be08eac1a16f"} Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.170853 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54991b7c1a3a0332dfab253e1601c81c58343c2ed490a71771c5be08eac1a16f" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.170877 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7kccx" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.322114 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:59:01 crc kubenswrapper[5007]: E0218 19:59:01.322490 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6839f27f-6779-4e91-80f8-669c7f4583e2" containerName="nova-cell0-conductor-db-sync" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.322506 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="6839f27f-6779-4e91-80f8-669c7f4583e2" containerName="nova-cell0-conductor-db-sync" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.322703 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="6839f27f-6779-4e91-80f8-669c7f4583e2" containerName="nova-cell0-conductor-db-sync" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.323368 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.325410 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.325841 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-79l24" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.341369 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.413822 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 
19:59:01.413895 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.414207 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn2xg\" (UniqueName: \"kubernetes.io/projected/e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a-kube-api-access-nn2xg\") pod \"nova-cell0-conductor-0\" (UID: \"e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.516797 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.517172 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.517329 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn2xg\" (UniqueName: \"kubernetes.io/projected/e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a-kube-api-access-nn2xg\") pod \"nova-cell0-conductor-0\" (UID: \"e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.521770 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.530472 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.550617 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn2xg\" (UniqueName: \"kubernetes.io/projected/e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a-kube-api-access-nn2xg\") pod \"nova-cell0-conductor-0\" (UID: \"e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:01 crc kubenswrapper[5007]: I0218 19:59:01.639622 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:02 crc kubenswrapper[5007]: I0218 19:59:02.166385 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:59:02 crc kubenswrapper[5007]: I0218 19:59:02.188396 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a","Type":"ContainerStarted","Data":"4618bc6c1b133544f502bf9604a06e59b8a91f0ca3be2faab38f2f881cd7b245"} Feb 18 19:59:03 crc kubenswrapper[5007]: I0218 19:59:03.201380 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e04bf2c1-33ef-4065-a2ad-dc2cb3ea593a","Type":"ContainerStarted","Data":"fb64db9ceac76df4da7781441bfa10dec2ed38a532f9957ce8a05657d164080c"} Feb 18 19:59:03 crc kubenswrapper[5007]: I0218 19:59:03.201775 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:08 crc kubenswrapper[5007]: I0218 19:59:08.586756 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:59:08 crc kubenswrapper[5007]: I0218 19:59:08.641710 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=7.641686365 podStartE2EDuration="7.641686365s" podCreationTimestamp="2026-02-18 19:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:59:03.228769094 +0000 UTC m=+1228.010888618" watchObservedRunningTime="2026-02-18 19:59:08.641686365 +0000 UTC m=+1233.423805899" Feb 18 19:59:11 crc kubenswrapper[5007]: I0218 19:59:11.684188 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.243873 5007 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4zz4g"] Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.246196 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.256001 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.256080 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.273852 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4zz4g"] Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.358805 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4zz4g\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.358874 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjcwt\" (UniqueName: \"kubernetes.io/projected/1c44e9b1-0ed5-466d-9602-e53210c2c34a-kube-api-access-kjcwt\") pod \"nova-cell0-cell-mapping-4zz4g\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.358920 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-scripts\") pod \"nova-cell0-cell-mapping-4zz4g\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " 
pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.358988 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-config-data\") pod \"nova-cell0-cell-mapping-4zz4g\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.461191 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4zz4g\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.461277 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjcwt\" (UniqueName: \"kubernetes.io/projected/1c44e9b1-0ed5-466d-9602-e53210c2c34a-kube-api-access-kjcwt\") pod \"nova-cell0-cell-mapping-4zz4g\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.461314 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-scripts\") pod \"nova-cell0-cell-mapping-4zz4g\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.461395 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-config-data\") pod \"nova-cell0-cell-mapping-4zz4g\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " 
pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.472184 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-config-data\") pod \"nova-cell0-cell-mapping-4zz4g\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.472252 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4zz4g\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.472951 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-scripts\") pod \"nova-cell0-cell-mapping-4zz4g\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.502092 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjcwt\" (UniqueName: \"kubernetes.io/projected/1c44e9b1-0ed5-466d-9602-e53210c2c34a-kube-api-access-kjcwt\") pod \"nova-cell0-cell-mapping-4zz4g\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.538652 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.539985 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.551106 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.561448 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.582733 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.582916 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9e731197-939b-4501-8baf-54690037a9ca" containerName="kube-state-metrics" containerID="cri-o://4aeac6dee38be1af8a76b7cf61c11e515cf29268bc0d7a311714eb2b24e973d8" gracePeriod=30 Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.613488 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.665312 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998c4c26-6713-456e-af55-7fc1ad00be06-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"998c4c26-6713-456e-af55-7fc1ad00be06\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.665353 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgxhw\" (UniqueName: \"kubernetes.io/projected/998c4c26-6713-456e-af55-7fc1ad00be06-kube-api-access-tgxhw\") pod \"nova-cell1-novncproxy-0\" (UID: \"998c4c26-6713-456e-af55-7fc1ad00be06\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.665473 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998c4c26-6713-456e-af55-7fc1ad00be06-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"998c4c26-6713-456e-af55-7fc1ad00be06\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.737441 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.739082 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.747824 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.768731 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.768821 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998c4c26-6713-456e-af55-7fc1ad00be06-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"998c4c26-6713-456e-af55-7fc1ad00be06\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.768861 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgxhw\" (UniqueName: \"kubernetes.io/projected/998c4c26-6713-456e-af55-7fc1ad00be06-kube-api-access-tgxhw\") pod \"nova-cell1-novncproxy-0\" (UID: \"998c4c26-6713-456e-af55-7fc1ad00be06\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.768931 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-logs\") pod \"nova-metadata-0\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.768968 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-config-data\") pod \"nova-metadata-0\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.768992 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpfw\" (UniqueName: \"kubernetes.io/projected/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-kube-api-access-2kpfw\") pod \"nova-metadata-0\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.769058 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998c4c26-6713-456e-af55-7fc1ad00be06-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"998c4c26-6713-456e-af55-7fc1ad00be06\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.769136 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.787855 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998c4c26-6713-456e-af55-7fc1ad00be06-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"998c4c26-6713-456e-af55-7fc1ad00be06\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.794043 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998c4c26-6713-456e-af55-7fc1ad00be06-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"998c4c26-6713-456e-af55-7fc1ad00be06\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.807499 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 
19:59:12.809047 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.825027 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgxhw\" (UniqueName: \"kubernetes.io/projected/998c4c26-6713-456e-af55-7fc1ad00be06-kube-api-access-tgxhw\") pod \"nova-cell1-novncproxy-0\" (UID: \"998c4c26-6713-456e-af55-7fc1ad00be06\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.830945 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.860497 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.861763 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.870724 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kpfw\" (UniqueName: \"kubernetes.io/projected/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-kube-api-access-2kpfw\") pod \"nova-metadata-0\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.870983 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5218352-e732-4b65-b885-8fb14b5b6e35-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " pod="openstack/nova-api-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.871175 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh4pt\" (UniqueName: 
\"kubernetes.io/projected/f5218352-e732-4b65-b885-8fb14b5b6e35-kube-api-access-sh4pt\") pod \"nova-api-0\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " pod="openstack/nova-api-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.871280 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.871367 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5218352-e732-4b65-b885-8fb14b5b6e35-logs\") pod \"nova-api-0\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " pod="openstack/nova-api-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.871531 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5218352-e732-4b65-b885-8fb14b5b6e35-config-data\") pod \"nova-api-0\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " pod="openstack/nova-api-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.871666 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-logs\") pod \"nova-metadata-0\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.871776 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-config-data\") pod \"nova-metadata-0\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc 
kubenswrapper[5007]: I0218 19:59:12.873297 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-logs\") pod \"nova-metadata-0\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.884966 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-config-data\") pod \"nova-metadata-0\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.888239 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.892322 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.897622 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.908589 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.926674 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kpfw\" (UniqueName: \"kubernetes.io/projected/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-kube-api-access-2kpfw\") pod \"nova-metadata-0\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " pod="openstack/nova-metadata-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.973006 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5218352-e732-4b65-b885-8fb14b5b6e35-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " pod="openstack/nova-api-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.973049 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.973117 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-config-data\") pod \"nova-scheduler-0\" (UID: \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.973144 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh4pt\" (UniqueName: \"kubernetes.io/projected/f5218352-e732-4b65-b885-8fb14b5b6e35-kube-api-access-sh4pt\") pod \"nova-api-0\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " pod="openstack/nova-api-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.973183 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5218352-e732-4b65-b885-8fb14b5b6e35-logs\") pod \"nova-api-0\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " pod="openstack/nova-api-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.973201 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5218352-e732-4b65-b885-8fb14b5b6e35-config-data\") pod \"nova-api-0\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " pod="openstack/nova-api-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.973217 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4rp7\" (UniqueName: \"kubernetes.io/projected/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-kube-api-access-l4rp7\") pod \"nova-scheduler-0\" (UID: \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.976702 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5218352-e732-4b65-b885-8fb14b5b6e35-logs\") pod \"nova-api-0\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " pod="openstack/nova-api-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.978028 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5218352-e732-4b65-b885-8fb14b5b6e35-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " pod="openstack/nova-api-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.983271 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5218352-e732-4b65-b885-8fb14b5b6e35-config-data\") pod \"nova-api-0\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " 
pod="openstack/nova-api-0" Feb 18 19:59:12 crc kubenswrapper[5007]: I0218 19:59:12.989536 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.009172 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh4pt\" (UniqueName: \"kubernetes.io/projected/f5218352-e732-4b65-b885-8fb14b5b6e35-kube-api-access-sh4pt\") pod \"nova-api-0\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " pod="openstack/nova-api-0" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.054252 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs"] Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.062044 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.074525 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-dns-swift-storage-0\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.074601 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-config-data\") pod \"nova-scheduler-0\" (UID: \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.074640 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-config\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: 
\"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.074673 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.074694 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4rp7\" (UniqueName: \"kubernetes.io/projected/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-kube-api-access-l4rp7\") pod \"nova-scheduler-0\" (UID: \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.074712 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-dns-svc\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.074746 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42hd4\" (UniqueName: \"kubernetes.io/projected/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-kube-api-access-42hd4\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.074806 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.074832 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.084619 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-config-data\") pod \"nova-scheduler-0\" (UID: \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.089621 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs"] Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.106957 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.120063 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4rp7\" (UniqueName: \"kubernetes.io/projected/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-kube-api-access-l4rp7\") pod \"nova-scheduler-0\" (UID: \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.175243 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.176362 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-config\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.176469 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.176525 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-dns-svc\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.176605 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42hd4\" (UniqueName: \"kubernetes.io/projected/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-kube-api-access-42hd4\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.176797 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 
19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.176882 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-dns-swift-storage-0\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.177342 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-config\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.177417 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.177931 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-dns-svc\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.178366 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-dns-swift-storage-0\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.182637 5007 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.199968 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42hd4\" (UniqueName: \"kubernetes.io/projected/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-kube-api-access-42hd4\") pod \"dnsmasq-dns-7fb5c7bcd5-5c8fs\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.201295 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.219572 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:59:13 crc kubenswrapper[5007]: W0218 19:59:13.318617 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c44e9b1_0ed5_466d_9602_e53210c2c34a.slice/crio-c16bc443d027a433e818186df93252c817eefab7d6601aa5c3716bb37144d868 WatchSource:0}: Error finding container c16bc443d027a433e818186df93252c817eefab7d6601aa5c3716bb37144d868: Status 404 returned error can't find the container with id c16bc443d027a433e818186df93252c817eefab7d6601aa5c3716bb37144d868 Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.324089 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4zz4g"] Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.369527 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4zz4g" 
event={"ID":"1c44e9b1-0ed5-466d-9602-e53210c2c34a","Type":"ContainerStarted","Data":"c16bc443d027a433e818186df93252c817eefab7d6601aa5c3716bb37144d868"} Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.381713 5007 generic.go:334] "Generic (PLEG): container finished" podID="9e731197-939b-4501-8baf-54690037a9ca" containerID="4aeac6dee38be1af8a76b7cf61c11e515cf29268bc0d7a311714eb2b24e973d8" exitCode=2 Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.381754 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9e731197-939b-4501-8baf-54690037a9ca","Type":"ContainerDied","Data":"4aeac6dee38be1af8a76b7cf61c11e515cf29268bc0d7a311714eb2b24e973d8"} Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.389012 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.480858 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.808936 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbb2l"] Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.810286 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.811981 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.812529 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.832824 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbb2l"] Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.868302 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.980269 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.993508 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-scripts\") pod \"nova-cell1-conductor-db-sync-xbb2l\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.993596 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xbb2l\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.993635 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-config-data\") pod 
\"nova-cell1-conductor-db-sync-xbb2l\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:13 crc kubenswrapper[5007]: I0218 19:59:13.993743 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r4ff\" (UniqueName: \"kubernetes.io/projected/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-kube-api-access-5r4ff\") pod \"nova-cell1-conductor-db-sync-xbb2l\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.005621 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.096282 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-scripts\") pod \"nova-cell1-conductor-db-sync-xbb2l\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.097471 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xbb2l\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.097605 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-config-data\") pod \"nova-cell1-conductor-db-sync-xbb2l\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 
19:59:14.098226 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs"] Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.101729 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-scripts\") pod \"nova-cell1-conductor-db-sync-xbb2l\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.102396 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r4ff\" (UniqueName: \"kubernetes.io/projected/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-kube-api-access-5r4ff\") pod \"nova-cell1-conductor-db-sync-xbb2l\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.102915 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xbb2l\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.110398 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-config-data\") pod \"nova-cell1-conductor-db-sync-xbb2l\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.114521 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.125469 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5r4ff\" (UniqueName: \"kubernetes.io/projected/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-kube-api-access-5r4ff\") pod \"nova-cell1-conductor-db-sync-xbb2l\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.135223 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.204045 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttp8t\" (UniqueName: \"kubernetes.io/projected/9e731197-939b-4501-8baf-54690037a9ca-kube-api-access-ttp8t\") pod \"9e731197-939b-4501-8baf-54690037a9ca\" (UID: \"9e731197-939b-4501-8baf-54690037a9ca\") " Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.208895 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e731197-939b-4501-8baf-54690037a9ca-kube-api-access-ttp8t" (OuterVolumeSpecName: "kube-api-access-ttp8t") pod "9e731197-939b-4501-8baf-54690037a9ca" (UID: "9e731197-939b-4501-8baf-54690037a9ca"). InnerVolumeSpecName "kube-api-access-ttp8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.308657 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttp8t\" (UniqueName: \"kubernetes.io/projected/9e731197-939b-4501-8baf-54690037a9ca-kube-api-access-ttp8t\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.420131 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.420297 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9e731197-939b-4501-8baf-54690037a9ca","Type":"ContainerDied","Data":"3c71e4d8c7e1465311792d3d19d898bc54141e72c7e5902e1dd5afedf607b594"} Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.420371 5007 scope.go:117] "RemoveContainer" containerID="4aeac6dee38be1af8a76b7cf61c11e515cf29268bc0d7a311714eb2b24e973d8" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.431182 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4zz4g" event={"ID":"1c44e9b1-0ed5-466d-9602-e53210c2c34a","Type":"ContainerStarted","Data":"f3967271c77a886891ecd8f64ea386036c748a9e34033181547499f3b5ae415c"} Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.437628 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"998c4c26-6713-456e-af55-7fc1ad00be06","Type":"ContainerStarted","Data":"15622d6e686d3c0adda4d3ecd6f902f7db4485e3565f87f7e9c788597e01b26a"} Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.443986 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5","Type":"ContainerStarted","Data":"afd2b7e3e304d59a4d680d824ef5a7ac2115a315dbb45773019df9fcea436137"} Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.449338 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5218352-e732-4b65-b885-8fb14b5b6e35","Type":"ContainerStarted","Data":"c58fe271fd3a0ecc6d8225ff71219fb34d478c6d71ebfcfa49e80f6dfbb9f3ab"} Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.452673 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9","Type":"ContainerStarted","Data":"4e20fc605a3e3a507f2eaa9edd3c44595a70a7943e7c1769dd4325d3fdc7a8b7"} Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.455113 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" event={"ID":"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236","Type":"ContainerStarted","Data":"333b9f49d250873fb3676cd76fab9780772f547c479642cb39962b2f60db4cf9"} Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.468266 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4zz4g" podStartSLOduration=2.468247088 podStartE2EDuration="2.468247088s" podCreationTimestamp="2026-02-18 19:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:59:14.44521749 +0000 UTC m=+1239.227337024" watchObservedRunningTime="2026-02-18 19:59:14.468247088 +0000 UTC m=+1239.250366612" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.561206 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.580452 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.594483 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:59:14 crc kubenswrapper[5007]: E0218 19:59:14.594989 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e731197-939b-4501-8baf-54690037a9ca" containerName="kube-state-metrics" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.595009 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e731197-939b-4501-8baf-54690037a9ca" containerName="kube-state-metrics" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.595242 5007 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9e731197-939b-4501-8baf-54690037a9ca" containerName="kube-state-metrics" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.595916 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.598494 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.600772 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.601064 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.648784 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbb2l"] Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.716624 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/de70538a-4895-4abf-a5a1-d4a5b29d66a3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"de70538a-4895-4abf-a5a1-d4a5b29d66a3\") " pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.716756 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/de70538a-4895-4abf-a5a1-d4a5b29d66a3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"de70538a-4895-4abf-a5a1-d4a5b29d66a3\") " pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.716787 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq4dd\" 
(UniqueName: \"kubernetes.io/projected/de70538a-4895-4abf-a5a1-d4a5b29d66a3-kube-api-access-wq4dd\") pod \"kube-state-metrics-0\" (UID: \"de70538a-4895-4abf-a5a1-d4a5b29d66a3\") " pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.716902 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de70538a-4895-4abf-a5a1-d4a5b29d66a3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"de70538a-4895-4abf-a5a1-d4a5b29d66a3\") " pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.818419 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq4dd\" (UniqueName: \"kubernetes.io/projected/de70538a-4895-4abf-a5a1-d4a5b29d66a3-kube-api-access-wq4dd\") pod \"kube-state-metrics-0\" (UID: \"de70538a-4895-4abf-a5a1-d4a5b29d66a3\") " pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.818670 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de70538a-4895-4abf-a5a1-d4a5b29d66a3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"de70538a-4895-4abf-a5a1-d4a5b29d66a3\") " pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.819742 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/de70538a-4895-4abf-a5a1-d4a5b29d66a3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"de70538a-4895-4abf-a5a1-d4a5b29d66a3\") " pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.819831 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/de70538a-4895-4abf-a5a1-d4a5b29d66a3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"de70538a-4895-4abf-a5a1-d4a5b29d66a3\") " pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.825373 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/de70538a-4895-4abf-a5a1-d4a5b29d66a3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"de70538a-4895-4abf-a5a1-d4a5b29d66a3\") " pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.825401 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de70538a-4895-4abf-a5a1-d4a5b29d66a3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"de70538a-4895-4abf-a5a1-d4a5b29d66a3\") " pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.829579 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/de70538a-4895-4abf-a5a1-d4a5b29d66a3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"de70538a-4895-4abf-a5a1-d4a5b29d66a3\") " pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.854558 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq4dd\" (UniqueName: \"kubernetes.io/projected/de70538a-4895-4abf-a5a1-d4a5b29d66a3-kube-api-access-wq4dd\") pod \"kube-state-metrics-0\" (UID: \"de70538a-4895-4abf-a5a1-d4a5b29d66a3\") " pod="openstack/kube-state-metrics-0" Feb 18 19:59:14 crc kubenswrapper[5007]: I0218 19:59:14.942386 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:59:15 crc kubenswrapper[5007]: I0218 19:59:15.468622 5007 generic.go:334] "Generic (PLEG): container finished" podID="1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" containerID="f14a5f48afa99d6c17db57bc6dceb5e7b691052c6f6db8a42ae874a7510a1224" exitCode=0 Feb 18 19:59:15 crc kubenswrapper[5007]: I0218 19:59:15.468732 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" event={"ID":"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236","Type":"ContainerDied","Data":"f14a5f48afa99d6c17db57bc6dceb5e7b691052c6f6db8a42ae874a7510a1224"} Feb 18 19:59:15 crc kubenswrapper[5007]: I0218 19:59:15.731782 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:59:15 crc kubenswrapper[5007]: I0218 19:59:15.732165 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="ceilometer-central-agent" containerID="cri-o://08a584c1a7bc88c2036ba25b7fc4ec20f63d78b0cd4b171c60b6a4efbc0e2287" gracePeriod=30 Feb 18 19:59:15 crc kubenswrapper[5007]: I0218 19:59:15.732213 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="proxy-httpd" containerID="cri-o://b1c8e2525184bb7c8b394b62980fcd89b08602b1ee48f88db7080e4fd6a19103" gracePeriod=30 Feb 18 19:59:15 crc kubenswrapper[5007]: I0218 19:59:15.732272 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="sg-core" containerID="cri-o://2908c501588e4898142296d6c0c210cca045514597ae82115594c1094050a236" gracePeriod=30 Feb 18 19:59:15 crc kubenswrapper[5007]: I0218 19:59:15.732304 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="ceilometer-notification-agent" containerID="cri-o://f6b494360468dcff1baa8032d04a67cc90312617edb79f2b866d2b9ab30d7a1b" gracePeriod=30 Feb 18 19:59:15 crc kubenswrapper[5007]: W0218 19:59:15.826135 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb07421c1_d538_4d07_92d5_19e6ffcd0b3e.slice/crio-821c6d39cf4fb73e35423595dd9f40507271d85a108ab04b28ee7b24882cf931 WatchSource:0}: Error finding container 821c6d39cf4fb73e35423595dd9f40507271d85a108ab04b28ee7b24882cf931: Status 404 returned error can't find the container with id 821c6d39cf4fb73e35423595dd9f40507271d85a108ab04b28ee7b24882cf931 Feb 18 19:59:15 crc kubenswrapper[5007]: I0218 19:59:15.922591 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e731197-939b-4501-8baf-54690037a9ca" path="/var/lib/kubelet/pods/9e731197-939b-4501-8baf-54690037a9ca/volumes" Feb 18 19:59:16 crc kubenswrapper[5007]: I0218 19:59:16.241604 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:16 crc kubenswrapper[5007]: I0218 19:59:16.252992 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:59:16 crc kubenswrapper[5007]: I0218 19:59:16.491033 5007 generic.go:334] "Generic (PLEG): container finished" podID="b9152855-158a-4ba6-b099-e65761296526" containerID="b1c8e2525184bb7c8b394b62980fcd89b08602b1ee48f88db7080e4fd6a19103" exitCode=0 Feb 18 19:59:16 crc kubenswrapper[5007]: I0218 19:59:16.491073 5007 generic.go:334] "Generic (PLEG): container finished" podID="b9152855-158a-4ba6-b099-e65761296526" containerID="2908c501588e4898142296d6c0c210cca045514597ae82115594c1094050a236" exitCode=2 Feb 18 19:59:16 crc kubenswrapper[5007]: I0218 19:59:16.491084 5007 generic.go:334] "Generic (PLEG): container finished" podID="b9152855-158a-4ba6-b099-e65761296526" 
containerID="08a584c1a7bc88c2036ba25b7fc4ec20f63d78b0cd4b171c60b6a4efbc0e2287" exitCode=0 Feb 18 19:59:16 crc kubenswrapper[5007]: I0218 19:59:16.491135 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9152855-158a-4ba6-b099-e65761296526","Type":"ContainerDied","Data":"b1c8e2525184bb7c8b394b62980fcd89b08602b1ee48f88db7080e4fd6a19103"} Feb 18 19:59:16 crc kubenswrapper[5007]: I0218 19:59:16.491165 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9152855-158a-4ba6-b099-e65761296526","Type":"ContainerDied","Data":"2908c501588e4898142296d6c0c210cca045514597ae82115594c1094050a236"} Feb 18 19:59:16 crc kubenswrapper[5007]: I0218 19:59:16.491181 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9152855-158a-4ba6-b099-e65761296526","Type":"ContainerDied","Data":"08a584c1a7bc88c2036ba25b7fc4ec20f63d78b0cd4b171c60b6a4efbc0e2287"} Feb 18 19:59:16 crc kubenswrapper[5007]: I0218 19:59:16.492881 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xbb2l" event={"ID":"b07421c1-d538-4d07-92d5-19e6ffcd0b3e","Type":"ContainerStarted","Data":"821c6d39cf4fb73e35423595dd9f40507271d85a108ab04b28ee7b24882cf931"} Feb 18 19:59:17 crc kubenswrapper[5007]: W0218 19:59:17.316727 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde70538a_4895_4abf_a5a1_d4a5b29d66a3.slice/crio-3a8a0330fe270d42e6aede5c11ed0d5d171d94779bf475ddaec18ea1d37401a6 WatchSource:0}: Error finding container 3a8a0330fe270d42e6aede5c11ed0d5d171d94779bf475ddaec18ea1d37401a6: Status 404 returned error can't find the container with id 3a8a0330fe270d42e6aede5c11ed0d5d171d94779bf475ddaec18ea1d37401a6 Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.325024 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 
18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.546047 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.551107 5007 generic.go:334] "Generic (PLEG): container finished" podID="b9152855-158a-4ba6-b099-e65761296526" containerID="f6b494360468dcff1baa8032d04a67cc90312617edb79f2b866d2b9ab30d7a1b" exitCode=0 Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.551183 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9152855-158a-4ba6-b099-e65761296526","Type":"ContainerDied","Data":"f6b494360468dcff1baa8032d04a67cc90312617edb79f2b866d2b9ab30d7a1b"} Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.551213 5007 scope.go:117] "RemoveContainer" containerID="b1c8e2525184bb7c8b394b62980fcd89b08602b1ee48f88db7080e4fd6a19103" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.569295 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5218352-e732-4b65-b885-8fb14b5b6e35","Type":"ContainerStarted","Data":"90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120"} Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.582676 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9","Type":"ContainerStarted","Data":"de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5"} Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.593861 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" event={"ID":"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236","Type":"ContainerStarted","Data":"d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be"} Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.595123 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 
19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.601555 5007 scope.go:117] "RemoveContainer" containerID="2908c501588e4898142296d6c0c210cca045514597ae82115594c1094050a236" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.616915 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"998c4c26-6713-456e-af55-7fc1ad00be06","Type":"ContainerStarted","Data":"b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f"} Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.617177 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="998c4c26-6713-456e-af55-7fc1ad00be06" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f" gracePeriod=30 Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.618153 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-config-data\") pod \"b9152855-158a-4ba6-b099-e65761296526\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.618269 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9152855-158a-4ba6-b099-e65761296526-run-httpd\") pod \"b9152855-158a-4ba6-b099-e65761296526\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.618329 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcmbm\" (UniqueName: \"kubernetes.io/projected/b9152855-158a-4ba6-b099-e65761296526-kube-api-access-dcmbm\") pod \"b9152855-158a-4ba6-b099-e65761296526\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.618493 5007 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-scripts\") pod \"b9152855-158a-4ba6-b099-e65761296526\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.618583 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9152855-158a-4ba6-b099-e65761296526-log-httpd\") pod \"b9152855-158a-4ba6-b099-e65761296526\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.618645 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-sg-core-conf-yaml\") pod \"b9152855-158a-4ba6-b099-e65761296526\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.618668 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-combined-ca-bundle\") pod \"b9152855-158a-4ba6-b099-e65761296526\" (UID: \"b9152855-158a-4ba6-b099-e65761296526\") " Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.631864 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9152855-158a-4ba6-b099-e65761296526-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b9152855-158a-4ba6-b099-e65761296526" (UID: "b9152855-158a-4ba6-b099-e65761296526"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.632423 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9152855-158a-4ba6-b099-e65761296526-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b9152855-158a-4ba6-b099-e65761296526" (UID: "b9152855-158a-4ba6-b099-e65761296526"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.637447 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" podStartSLOduration=5.637404086 podStartE2EDuration="5.637404086s" podCreationTimestamp="2026-02-18 19:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:59:17.628054439 +0000 UTC m=+1242.410173963" watchObservedRunningTime="2026-02-18 19:59:17.637404086 +0000 UTC m=+1242.419523610" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.645636 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-scripts" (OuterVolumeSpecName: "scripts") pod "b9152855-158a-4ba6-b099-e65761296526" (UID: "b9152855-158a-4ba6-b099-e65761296526"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.648633 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9152855-158a-4ba6-b099-e65761296526-kube-api-access-dcmbm" (OuterVolumeSpecName: "kube-api-access-dcmbm") pod "b9152855-158a-4ba6-b099-e65761296526" (UID: "b9152855-158a-4ba6-b099-e65761296526"). InnerVolumeSpecName "kube-api-access-dcmbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.675032 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xbb2l" event={"ID":"b07421c1-d538-4d07-92d5-19e6ffcd0b3e","Type":"ContainerStarted","Data":"93db4e202598458bd66cb8df69cb4c3d2b2eff78a59f7972ecfd302a3dc92921"} Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.677388 5007 scope.go:117] "RemoveContainer" containerID="f6b494360468dcff1baa8032d04a67cc90312617edb79f2b866d2b9ab30d7a1b" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.680131 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b9152855-158a-4ba6-b099-e65761296526" (UID: "b9152855-158a-4ba6-b099-e65761296526"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.699501 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de70538a-4895-4abf-a5a1-d4a5b29d66a3","Type":"ContainerStarted","Data":"3a8a0330fe270d42e6aede5c11ed0d5d171d94779bf475ddaec18ea1d37401a6"} Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.720936 5007 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9152855-158a-4ba6-b099-e65761296526-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.720919 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.447972895 podStartE2EDuration="5.720896642s" podCreationTimestamp="2026-02-18 19:59:12 +0000 UTC" firstStartedPulling="2026-02-18 19:59:13.518533353 +0000 UTC m=+1238.300652877" lastFinishedPulling="2026-02-18 19:59:16.79145709 +0000 
UTC m=+1241.573576624" observedRunningTime="2026-02-18 19:59:17.648940441 +0000 UTC m=+1242.431059975" watchObservedRunningTime="2026-02-18 19:59:17.720896642 +0000 UTC m=+1242.503016166" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.720963 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcmbm\" (UniqueName: \"kubernetes.io/projected/b9152855-158a-4ba6-b099-e65761296526-kube-api-access-dcmbm\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.721282 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.721297 5007 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9152855-158a-4ba6-b099-e65761296526-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.721310 5007 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.734672 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5","Type":"ContainerStarted","Data":"06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb"} Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.749057 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-xbb2l" podStartSLOduration=4.749040085 podStartE2EDuration="4.749040085s" podCreationTimestamp="2026-02-18 19:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 
19:59:17.701118844 +0000 UTC m=+1242.483238368" watchObservedRunningTime="2026-02-18 19:59:17.749040085 +0000 UTC m=+1242.531159609" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.769658 5007 scope.go:117] "RemoveContainer" containerID="08a584c1a7bc88c2036ba25b7fc4ec20f63d78b0cd4b171c60b6a4efbc0e2287" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.806034 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.094258469 podStartE2EDuration="5.806016228s" podCreationTimestamp="2026-02-18 19:59:12 +0000 UTC" firstStartedPulling="2026-02-18 19:59:14.10916252 +0000 UTC m=+1238.891282044" lastFinishedPulling="2026-02-18 19:59:16.820920289 +0000 UTC m=+1241.603039803" observedRunningTime="2026-02-18 19:59:17.759588467 +0000 UTC m=+1242.541708001" watchObservedRunningTime="2026-02-18 19:59:17.806016228 +0000 UTC m=+1242.588135752" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.903732 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-config-data" (OuterVolumeSpecName: "config-data") pod "b9152855-158a-4ba6-b099-e65761296526" (UID: "b9152855-158a-4ba6-b099-e65761296526"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.923909 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:17 crc kubenswrapper[5007]: I0218 19:59:17.939346 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9152855-158a-4ba6-b099-e65761296526" (UID: "b9152855-158a-4ba6-b099-e65761296526"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.026381 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9152855-158a-4ba6-b099-e65761296526-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.052174 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.221777 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.749844 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.749855 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9152855-158a-4ba6-b099-e65761296526","Type":"ContainerDied","Data":"77520111db3378f7e2a763a9302b2a7bddd19c2c635dc65e9cdd2d4e036c5bf4"} Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.752789 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5218352-e732-4b65-b885-8fb14b5b6e35","Type":"ContainerStarted","Data":"3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43"} Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.756450 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9","Type":"ContainerStarted","Data":"02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972"} Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.756551 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" 
containerName="nova-metadata-log" containerID="cri-o://de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5" gracePeriod=30 Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.756760 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" containerName="nova-metadata-metadata" containerID="cri-o://02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972" gracePeriod=30 Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.760081 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de70538a-4895-4abf-a5a1-d4a5b29d66a3","Type":"ContainerStarted","Data":"e22370fda06d95016e787f7453450bf301ce54fa6818a62e74b87c18806add27"} Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.760109 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.821691 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.97726869 podStartE2EDuration="6.821672871s" podCreationTimestamp="2026-02-18 19:59:12 +0000 UTC" firstStartedPulling="2026-02-18 19:59:13.995622678 +0000 UTC m=+1238.777742202" lastFinishedPulling="2026-02-18 19:59:16.840026869 +0000 UTC m=+1241.622146383" observedRunningTime="2026-02-18 19:59:18.800101791 +0000 UTC m=+1243.582221315" watchObservedRunningTime="2026-02-18 19:59:18.821672871 +0000 UTC m=+1243.603792395" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.830283 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.870081378 podStartE2EDuration="6.830265749s" podCreationTimestamp="2026-02-18 19:59:12 +0000 UTC" firstStartedPulling="2026-02-18 19:59:13.860721648 +0000 UTC m=+1238.642841172" lastFinishedPulling="2026-02-18 19:59:16.820906019 
+0000 UTC m=+1241.603025543" observedRunningTime="2026-02-18 19:59:18.822615683 +0000 UTC m=+1243.604735217" watchObservedRunningTime="2026-02-18 19:59:18.830265749 +0000 UTC m=+1243.612385273" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.861933 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.272480482 podStartE2EDuration="4.861909905s" podCreationTimestamp="2026-02-18 19:59:14 +0000 UTC" firstStartedPulling="2026-02-18 19:59:17.321262467 +0000 UTC m=+1242.103381991" lastFinishedPulling="2026-02-18 19:59:17.91069189 +0000 UTC m=+1242.692811414" observedRunningTime="2026-02-18 19:59:18.841084194 +0000 UTC m=+1243.623203718" watchObservedRunningTime="2026-02-18 19:59:18.861909905 +0000 UTC m=+1243.644029429" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.886176 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.899495 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.908925 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:59:18 crc kubenswrapper[5007]: E0218 19:59:18.909901 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="ceilometer-notification-agent" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.909980 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="ceilometer-notification-agent" Feb 18 19:59:18 crc kubenswrapper[5007]: E0218 19:59:18.910065 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="sg-core" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.910124 5007 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="sg-core" Feb 18 19:59:18 crc kubenswrapper[5007]: E0218 19:59:18.910191 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="ceilometer-central-agent" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.910250 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="ceilometer-central-agent" Feb 18 19:59:18 crc kubenswrapper[5007]: E0218 19:59:18.910313 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="proxy-httpd" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.910364 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="proxy-httpd" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.910716 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="proxy-httpd" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.910796 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="ceilometer-central-agent" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.910875 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="ceilometer-notification-agent" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.910945 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9152855-158a-4ba6-b099-e65761296526" containerName="sg-core" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.917332 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.922102 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.923633 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.923838 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.927739 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.947121 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dprcr\" (UniqueName: \"kubernetes.io/projected/1ba44459-02df-4479-a9bb-ad0a9864e6f6-kube-api-access-dprcr\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.947191 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-config-data\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.947766 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.947927 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.948051 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ba44459-02df-4479-a9bb-ad0a9864e6f6-run-httpd\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.948108 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ba44459-02df-4479-a9bb-ad0a9864e6f6-log-httpd\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.948178 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-scripts\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:18 crc kubenswrapper[5007]: I0218 19:59:18.948216 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.050410 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ba44459-02df-4479-a9bb-ad0a9864e6f6-run-httpd\") pod \"ceilometer-0\" (UID: 
\"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.050474 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ba44459-02df-4479-a9bb-ad0a9864e6f6-log-httpd\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.050500 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-scripts\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.050530 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.050565 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dprcr\" (UniqueName: \"kubernetes.io/projected/1ba44459-02df-4479-a9bb-ad0a9864e6f6-kube-api-access-dprcr\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.050602 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-config-data\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.050715 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.050739 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.050961 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ba44459-02df-4479-a9bb-ad0a9864e6f6-run-httpd\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.051055 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ba44459-02df-4479-a9bb-ad0a9864e6f6-log-httpd\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.059694 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.060492 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc 
kubenswrapper[5007]: I0218 19:59:19.060761 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-config-data\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.064170 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.115593 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-scripts\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.117166 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dprcr\" (UniqueName: \"kubernetes.io/projected/1ba44459-02df-4479-a9bb-ad0a9864e6f6-kube-api-access-dprcr\") pod \"ceilometer-0\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.256777 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.395326 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.563721 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kpfw\" (UniqueName: \"kubernetes.io/projected/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-kube-api-access-2kpfw\") pod \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.563870 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-logs\") pod \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.563949 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-combined-ca-bundle\") pod \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.564022 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-config-data\") pod \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\" (UID: \"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9\") " Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.564403 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-logs" (OuterVolumeSpecName: "logs") pod "234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" (UID: "234ed292-a2b6-4856-86ab-3c5d7c6cd5f9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.565027 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.569623 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-kube-api-access-2kpfw" (OuterVolumeSpecName: "kube-api-access-2kpfw") pod "234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" (UID: "234ed292-a2b6-4856-86ab-3c5d7c6cd5f9"). InnerVolumeSpecName "kube-api-access-2kpfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.591854 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" (UID: "234ed292-a2b6-4856-86ab-3c5d7c6cd5f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.599628 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-config-data" (OuterVolumeSpecName: "config-data") pod "234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" (UID: "234ed292-a2b6-4856-86ab-3c5d7c6cd5f9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.666470 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.666499 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.666508 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kpfw\" (UniqueName: \"kubernetes.io/projected/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9-kube-api-access-2kpfw\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.741244 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:59:19 crc kubenswrapper[5007]: W0218 19:59:19.746634 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ba44459_02df_4479_a9bb_ad0a9864e6f6.slice/crio-364ebeb9debb2916017b170bb9ec002b43e57df0376ed838c74b7b6671cbea85 WatchSource:0}: Error finding container 364ebeb9debb2916017b170bb9ec002b43e57df0376ed838c74b7b6671cbea85: Status 404 returned error can't find the container with id 364ebeb9debb2916017b170bb9ec002b43e57df0376ed838c74b7b6671cbea85 Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.771680 5007 generic.go:334] "Generic (PLEG): container finished" podID="234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" containerID="02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972" exitCode=0 Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.771715 5007 generic.go:334] "Generic (PLEG): container finished" podID="234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" 
containerID="de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5" exitCode=143 Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.771795 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9","Type":"ContainerDied","Data":"02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972"} Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.771848 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9","Type":"ContainerDied","Data":"de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5"} Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.771860 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"234ed292-a2b6-4856-86ab-3c5d7c6cd5f9","Type":"ContainerDied","Data":"4e20fc605a3e3a507f2eaa9edd3c44595a70a7943e7c1769dd4325d3fdc7a8b7"} Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.771893 5007 scope.go:117] "RemoveContainer" containerID="02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.773531 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.779499 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ba44459-02df-4479-a9bb-ad0a9864e6f6","Type":"ContainerStarted","Data":"364ebeb9debb2916017b170bb9ec002b43e57df0376ed838c74b7b6671cbea85"} Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.832894 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.859798 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.875486 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:19 crc kubenswrapper[5007]: E0218 19:59:19.875881 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" containerName="nova-metadata-log" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.875898 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" containerName="nova-metadata-log" Feb 18 19:59:19 crc kubenswrapper[5007]: E0218 19:59:19.875913 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" containerName="nova-metadata-metadata" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.875919 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" containerName="nova-metadata-metadata" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.876112 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" containerName="nova-metadata-metadata" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.876130 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" 
containerName="nova-metadata-log" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.877262 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.881140 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.882198 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.885904 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.922149 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="234ed292-a2b6-4856-86ab-3c5d7c6cd5f9" path="/var/lib/kubelet/pods/234ed292-a2b6-4856-86ab-3c5d7c6cd5f9/volumes" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.922803 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9152855-158a-4ba6-b099-e65761296526" path="/var/lib/kubelet/pods/b9152855-158a-4ba6-b099-e65761296526/volumes" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.976493 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.976594 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnb98\" (UniqueName: \"kubernetes.io/projected/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-kube-api-access-bnb98\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 
18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.976645 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-config-data\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.976704 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:19 crc kubenswrapper[5007]: I0218 19:59:19.976911 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-logs\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.046158 5007 scope.go:117] "RemoveContainer" containerID="de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.074532 5007 scope.go:117] "RemoveContainer" containerID="02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972" Feb 18 19:59:20 crc kubenswrapper[5007]: E0218 19:59:20.074991 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972\": container with ID starting with 02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972 not found: ID does not exist" containerID="02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 
19:59:20.075021 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972"} err="failed to get container status \"02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972\": rpc error: code = NotFound desc = could not find container \"02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972\": container with ID starting with 02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972 not found: ID does not exist" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.075044 5007 scope.go:117] "RemoveContainer" containerID="de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5" Feb 18 19:59:20 crc kubenswrapper[5007]: E0218 19:59:20.075276 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5\": container with ID starting with de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5 not found: ID does not exist" containerID="de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.075310 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5"} err="failed to get container status \"de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5\": rpc error: code = NotFound desc = could not find container \"de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5\": container with ID starting with de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5 not found: ID does not exist" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.075331 5007 scope.go:117] "RemoveContainer" containerID="02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972" Feb 18 19:59:20 crc 
kubenswrapper[5007]: I0218 19:59:20.075571 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972"} err="failed to get container status \"02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972\": rpc error: code = NotFound desc = could not find container \"02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972\": container with ID starting with 02bb0a884a1338006e6e58d02a4d4950613c053a7e365250d636984cc3027972 not found: ID does not exist" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.075613 5007 scope.go:117] "RemoveContainer" containerID="de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.075942 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5"} err="failed to get container status \"de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5\": rpc error: code = NotFound desc = could not find container \"de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5\": container with ID starting with de41166d1fbe834fa96cde4a323db678320a94ca705988ebb66a89fc721698e5 not found: ID does not exist" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.077706 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnb98\" (UniqueName: \"kubernetes.io/projected/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-kube-api-access-bnb98\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.077808 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-config-data\") pod \"nova-metadata-0\" (UID: 
\"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.077922 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.078026 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-logs\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.078087 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.078701 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-logs\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.085184 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-config-data\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.094010 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.094004 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.097199 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnb98\" (UniqueName: \"kubernetes.io/projected/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-kube-api-access-bnb98\") pod \"nova-metadata-0\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " pod="openstack/nova-metadata-0" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.254317 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.790258 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ba44459-02df-4479-a9bb-ad0a9864e6f6","Type":"ContainerStarted","Data":"7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906"} Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.790526 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ba44459-02df-4479-a9bb-ad0a9864e6f6","Type":"ContainerStarted","Data":"0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25"} Feb 18 19:59:20 crc kubenswrapper[5007]: I0218 19:59:20.814510 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:20 crc kubenswrapper[5007]: W0218 19:59:20.819659 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffadfdfd_e7d0_4f2a_9c8a_ffd4b0b00ad2.slice/crio-fff834bef728229c60c88b531c53edeaed90d0c06366480757bda433223d653c WatchSource:0}: Error finding container fff834bef728229c60c88b531c53edeaed90d0c06366480757bda433223d653c: Status 404 returned error can't find the container with id fff834bef728229c60c88b531c53edeaed90d0c06366480757bda433223d653c Feb 18 19:59:21 crc kubenswrapper[5007]: I0218 19:59:21.811134 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ba44459-02df-4479-a9bb-ad0a9864e6f6","Type":"ContainerStarted","Data":"2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6"} Feb 18 19:59:21 crc kubenswrapper[5007]: I0218 19:59:21.813761 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2","Type":"ContainerStarted","Data":"54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75"} Feb 18 19:59:21 crc kubenswrapper[5007]: I0218 
19:59:21.813800 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2","Type":"ContainerStarted","Data":"d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770"} Feb 18 19:59:21 crc kubenswrapper[5007]: I0218 19:59:21.813819 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2","Type":"ContainerStarted","Data":"fff834bef728229c60c88b531c53edeaed90d0c06366480757bda433223d653c"} Feb 18 19:59:21 crc kubenswrapper[5007]: I0218 19:59:21.844230 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.844168862 podStartE2EDuration="2.844168862s" podCreationTimestamp="2026-02-18 19:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:59:21.832483345 +0000 UTC m=+1246.614602959" watchObservedRunningTime="2026-02-18 19:59:21.844168862 +0000 UTC m=+1246.626288416" Feb 18 19:59:23 crc kubenswrapper[5007]: I0218 19:59:23.204826 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:59:23 crc kubenswrapper[5007]: I0218 19:59:23.204866 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:59:23 crc kubenswrapper[5007]: I0218 19:59:23.221124 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:59:23 crc kubenswrapper[5007]: I0218 19:59:23.391687 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 19:59:23 crc kubenswrapper[5007]: I0218 19:59:23.489733 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66c5ff9dd9-cjthg"] Feb 18 19:59:23 crc 
kubenswrapper[5007]: I0218 19:59:23.490127 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg" podUID="be7a20a5-350e-4f7d-8532-7a6e7198d17f" containerName="dnsmasq-dns" containerID="cri-o://71ed0c12e49a861bce007f9cbe64c13a43f0d4d8d87eba94f4e6967732eb2b05" gracePeriod=10 Feb 18 19:59:23 crc kubenswrapper[5007]: I0218 19:59:23.629084 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:59:23 crc kubenswrapper[5007]: I0218 19:59:23.847000 5007 generic.go:334] "Generic (PLEG): container finished" podID="be7a20a5-350e-4f7d-8532-7a6e7198d17f" containerID="71ed0c12e49a861bce007f9cbe64c13a43f0d4d8d87eba94f4e6967732eb2b05" exitCode=0 Feb 18 19:59:23 crc kubenswrapper[5007]: I0218 19:59:23.847062 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg" event={"ID":"be7a20a5-350e-4f7d-8532-7a6e7198d17f","Type":"ContainerDied","Data":"71ed0c12e49a861bce007f9cbe64c13a43f0d4d8d87eba94f4e6967732eb2b05"} Feb 18 19:59:23 crc kubenswrapper[5007]: I0218 19:59:23.848688 5007 generic.go:334] "Generic (PLEG): container finished" podID="1c44e9b1-0ed5-466d-9602-e53210c2c34a" containerID="f3967271c77a886891ecd8f64ea386036c748a9e34033181547499f3b5ae415c" exitCode=0 Feb 18 19:59:23 crc kubenswrapper[5007]: I0218 19:59:23.849524 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4zz4g" event={"ID":"1c44e9b1-0ed5-466d-9602-e53210c2c34a","Type":"ContainerDied","Data":"f3967271c77a886891ecd8f64ea386036c748a9e34033181547499f3b5ae415c"} Feb 18 19:59:23 crc kubenswrapper[5007]: I0218 19:59:23.926175 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.186846 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.195033 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-ovsdbserver-nb\") pod \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.195249 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-ovsdbserver-sb\") pod \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.195289 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-dns-svc\") pod \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.195349 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9bq2\" (UniqueName: \"kubernetes.io/projected/be7a20a5-350e-4f7d-8532-7a6e7198d17f-kube-api-access-c9bq2\") pod \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.195382 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-dns-swift-storage-0\") pod \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.195404 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-config\") pod \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\" (UID: \"be7a20a5-350e-4f7d-8532-7a6e7198d17f\") " Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.203570 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f5218352-e732-4b65-b885-8fb14b5b6e35" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.206532 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f5218352-e732-4b65-b885-8fb14b5b6e35" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.274282 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "be7a20a5-350e-4f7d-8532-7a6e7198d17f" (UID: "be7a20a5-350e-4f7d-8532-7a6e7198d17f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.279390 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be7a20a5-350e-4f7d-8532-7a6e7198d17f" (UID: "be7a20a5-350e-4f7d-8532-7a6e7198d17f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.290819 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-config" (OuterVolumeSpecName: "config") pod "be7a20a5-350e-4f7d-8532-7a6e7198d17f" (UID: "be7a20a5-350e-4f7d-8532-7a6e7198d17f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.290943 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be7a20a5-350e-4f7d-8532-7a6e7198d17f" (UID: "be7a20a5-350e-4f7d-8532-7a6e7198d17f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.296692 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "be7a20a5-350e-4f7d-8532-7a6e7198d17f" (UID: "be7a20a5-350e-4f7d-8532-7a6e7198d17f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.296977 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7a20a5-350e-4f7d-8532-7a6e7198d17f-kube-api-access-c9bq2" (OuterVolumeSpecName: "kube-api-access-c9bq2") pod "be7a20a5-350e-4f7d-8532-7a6e7198d17f" (UID: "be7a20a5-350e-4f7d-8532-7a6e7198d17f"). InnerVolumeSpecName "kube-api-access-c9bq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.297266 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.297281 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.297291 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9bq2\" (UniqueName: \"kubernetes.io/projected/be7a20a5-350e-4f7d-8532-7a6e7198d17f-kube-api-access-c9bq2\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.297304 5007 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.297313 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.297321 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be7a20a5-350e-4f7d-8532-7a6e7198d17f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.860108 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg" event={"ID":"be7a20a5-350e-4f7d-8532-7a6e7198d17f","Type":"ContainerDied","Data":"869d4c56267a4b99f21b5cac4c69f6f917c181e58b5b2ecdb789a18b1461dcbb"} Feb 18 19:59:24 crc 
kubenswrapper[5007]: I0218 19:59:24.860391 5007 scope.go:117] "RemoveContainer" containerID="71ed0c12e49a861bce007f9cbe64c13a43f0d4d8d87eba94f4e6967732eb2b05" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.860181 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66c5ff9dd9-cjthg" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.864714 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ba44459-02df-4479-a9bb-ad0a9864e6f6","Type":"ContainerStarted","Data":"e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab"} Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.864743 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.902819 5007 scope.go:117] "RemoveContainer" containerID="bc7510da3ceaa6d8a006bc3a0c7566c16a11808254150d6ea12897ee3a3be11a" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.914382 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.01218289 podStartE2EDuration="6.91436056s" podCreationTimestamp="2026-02-18 19:59:18 +0000 UTC" firstStartedPulling="2026-02-18 19:59:19.749040718 +0000 UTC m=+1244.531160242" lastFinishedPulling="2026-02-18 19:59:23.651218388 +0000 UTC m=+1248.433337912" observedRunningTime="2026-02-18 19:59:24.897377247 +0000 UTC m=+1249.679496771" watchObservedRunningTime="2026-02-18 19:59:24.91436056 +0000 UTC m=+1249.696480094" Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.928474 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66c5ff9dd9-cjthg"] Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.935541 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66c5ff9dd9-cjthg"] Feb 18 19:59:24 crc kubenswrapper[5007]: I0218 19:59:24.971845 5007 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.258563 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.258677 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.340272 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.431664 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjcwt\" (UniqueName: \"kubernetes.io/projected/1c44e9b1-0ed5-466d-9602-e53210c2c34a-kube-api-access-kjcwt\") pod \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.431742 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-config-data\") pod \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.431815 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-combined-ca-bundle\") pod \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\" (UID: \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.431839 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-scripts\") pod \"1c44e9b1-0ed5-466d-9602-e53210c2c34a\" (UID: 
\"1c44e9b1-0ed5-466d-9602-e53210c2c34a\") " Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.455543 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-scripts" (OuterVolumeSpecName: "scripts") pod "1c44e9b1-0ed5-466d-9602-e53210c2c34a" (UID: "1c44e9b1-0ed5-466d-9602-e53210c2c34a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.457494 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c44e9b1-0ed5-466d-9602-e53210c2c34a-kube-api-access-kjcwt" (OuterVolumeSpecName: "kube-api-access-kjcwt") pod "1c44e9b1-0ed5-466d-9602-e53210c2c34a" (UID: "1c44e9b1-0ed5-466d-9602-e53210c2c34a"). InnerVolumeSpecName "kube-api-access-kjcwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.461258 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-config-data" (OuterVolumeSpecName: "config-data") pod "1c44e9b1-0ed5-466d-9602-e53210c2c34a" (UID: "1c44e9b1-0ed5-466d-9602-e53210c2c34a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.468926 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c44e9b1-0ed5-466d-9602-e53210c2c34a" (UID: "1c44e9b1-0ed5-466d-9602-e53210c2c34a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.534387 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.534449 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.534469 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjcwt\" (UniqueName: \"kubernetes.io/projected/1c44e9b1-0ed5-466d-9602-e53210c2c34a-kube-api-access-kjcwt\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.534484 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c44e9b1-0ed5-466d-9602-e53210c2c34a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.874809 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4zz4g" event={"ID":"1c44e9b1-0ed5-466d-9602-e53210c2c34a","Type":"ContainerDied","Data":"c16bc443d027a433e818186df93252c817eefab7d6601aa5c3716bb37144d868"} Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.875159 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16bc443d027a433e818186df93252c817eefab7d6601aa5c3716bb37144d868" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.875597 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4zz4g" Feb 18 19:59:25 crc kubenswrapper[5007]: I0218 19:59:25.922593 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be7a20a5-350e-4f7d-8532-7a6e7198d17f" path="/var/lib/kubelet/pods/be7a20a5-350e-4f7d-8532-7a6e7198d17f/volumes" Feb 18 19:59:26 crc kubenswrapper[5007]: I0218 19:59:26.032332 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:26 crc kubenswrapper[5007]: I0218 19:59:26.032594 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f5218352-e732-4b65-b885-8fb14b5b6e35" containerName="nova-api-log" containerID="cri-o://90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120" gracePeriod=30 Feb 18 19:59:26 crc kubenswrapper[5007]: I0218 19:59:26.032726 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f5218352-e732-4b65-b885-8fb14b5b6e35" containerName="nova-api-api" containerID="cri-o://3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43" gracePeriod=30 Feb 18 19:59:26 crc kubenswrapper[5007]: I0218 19:59:26.070226 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:59:26 crc kubenswrapper[5007]: I0218 19:59:26.070418 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1b00f2e4-075e-4db2-a1da-9a43c08b8ec5" containerName="nova-scheduler-scheduler" containerID="cri-o://06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb" gracePeriod=30 Feb 18 19:59:26 crc kubenswrapper[5007]: I0218 19:59:26.083369 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:26 crc kubenswrapper[5007]: I0218 19:59:26.888757 5007 generic.go:334] "Generic (PLEG): container finished" podID="f5218352-e732-4b65-b885-8fb14b5b6e35" 
containerID="90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120" exitCode=143 Feb 18 19:59:26 crc kubenswrapper[5007]: I0218 19:59:26.888792 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5218352-e732-4b65-b885-8fb14b5b6e35","Type":"ContainerDied","Data":"90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120"} Feb 18 19:59:26 crc kubenswrapper[5007]: I0218 19:59:26.888980 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" containerName="nova-metadata-log" containerID="cri-o://d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770" gracePeriod=30 Feb 18 19:59:26 crc kubenswrapper[5007]: I0218 19:59:26.889026 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" containerName="nova-metadata-metadata" containerID="cri-o://54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75" gracePeriod=30 Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.498455 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.684358 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-logs\") pod \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.684486 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-config-data\") pod \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.684559 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnb98\" (UniqueName: \"kubernetes.io/projected/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-kube-api-access-bnb98\") pod \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.684624 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-combined-ca-bundle\") pod \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.684646 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-nova-metadata-tls-certs\") pod \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\" (UID: \"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2\") " Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.684820 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-logs" (OuterVolumeSpecName: "logs") pod "ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" (UID: "ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.685129 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.689596 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-kube-api-access-bnb98" (OuterVolumeSpecName: "kube-api-access-bnb98") pod "ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" (UID: "ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2"). InnerVolumeSpecName "kube-api-access-bnb98". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.716748 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" (UID: "ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.719323 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-config-data" (OuterVolumeSpecName: "config-data") pod "ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" (UID: "ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.737130 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" (UID: "ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.787507 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.787534 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnb98\" (UniqueName: \"kubernetes.io/projected/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-kube-api-access-bnb98\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.787544 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.787552 5007 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.896113 5007 generic.go:334] "Generic (PLEG): container finished" podID="b07421c1-d538-4d07-92d5-19e6ffcd0b3e" containerID="93db4e202598458bd66cb8df69cb4c3d2b2eff78a59f7972ecfd302a3dc92921" exitCode=0 Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.896166 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-xbb2l" event={"ID":"b07421c1-d538-4d07-92d5-19e6ffcd0b3e","Type":"ContainerDied","Data":"93db4e202598458bd66cb8df69cb4c3d2b2eff78a59f7972ecfd302a3dc92921"} Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.897594 5007 generic.go:334] "Generic (PLEG): container finished" podID="ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" containerID="54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75" exitCode=0 Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.897613 5007 generic.go:334] "Generic (PLEG): container finished" podID="ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" containerID="d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770" exitCode=143 Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.897627 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2","Type":"ContainerDied","Data":"54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75"} Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.897641 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2","Type":"ContainerDied","Data":"d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770"} Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.897650 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2","Type":"ContainerDied","Data":"fff834bef728229c60c88b531c53edeaed90d0c06366480757bda433223d653c"} Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.897664 5007 scope.go:117] "RemoveContainer" containerID="54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.897751 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.936152 5007 scope.go:117] "RemoveContainer" containerID="d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.948665 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.963515 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.970152 5007 scope.go:117] "RemoveContainer" containerID="54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75" Feb 18 19:59:27 crc kubenswrapper[5007]: E0218 19:59:27.970619 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75\": container with ID starting with 54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75 not found: ID does not exist" containerID="54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.970661 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75"} err="failed to get container status \"54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75\": rpc error: code = NotFound desc = could not find container \"54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75\": container with ID starting with 54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75 not found: ID does not exist" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.970687 5007 scope.go:117] "RemoveContainer" containerID="d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770" Feb 18 19:59:27 crc kubenswrapper[5007]: 
E0218 19:59:27.970904 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770\": container with ID starting with d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770 not found: ID does not exist" containerID="d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.970923 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770"} err="failed to get container status \"d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770\": rpc error: code = NotFound desc = could not find container \"d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770\": container with ID starting with d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770 not found: ID does not exist" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.970940 5007 scope.go:117] "RemoveContainer" containerID="54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.971087 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75"} err="failed to get container status \"54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75\": rpc error: code = NotFound desc = could not find container \"54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75\": container with ID starting with 54c27f592ad45102d36efb4094e223ac360fee5d5411e69acb753c0d488e5a75 not found: ID does not exist" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.971111 5007 scope.go:117] "RemoveContainer" containerID="d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770" Feb 18 19:59:27 crc 
kubenswrapper[5007]: I0218 19:59:27.971299 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770"} err="failed to get container status \"d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770\": rpc error: code = NotFound desc = could not find container \"d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770\": container with ID starting with d511a054095236dc5b3732e1ebe17f8f7bd01e03703b0dddd47de8f81f293770 not found: ID does not exist" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.975537 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:27 crc kubenswrapper[5007]: E0218 19:59:27.976170 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" containerName="nova-metadata-metadata" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.976203 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" containerName="nova-metadata-metadata" Feb 18 19:59:27 crc kubenswrapper[5007]: E0218 19:59:27.976276 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" containerName="nova-metadata-log" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.976292 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" containerName="nova-metadata-log" Feb 18 19:59:27 crc kubenswrapper[5007]: E0218 19:59:27.976320 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be7a20a5-350e-4f7d-8532-7a6e7198d17f" containerName="init" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.976335 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="be7a20a5-350e-4f7d-8532-7a6e7198d17f" containerName="init" Feb 18 19:59:27 crc kubenswrapper[5007]: E0218 19:59:27.976351 5007 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1c44e9b1-0ed5-466d-9602-e53210c2c34a" containerName="nova-manage" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.976363 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c44e9b1-0ed5-466d-9602-e53210c2c34a" containerName="nova-manage" Feb 18 19:59:27 crc kubenswrapper[5007]: E0218 19:59:27.976386 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be7a20a5-350e-4f7d-8532-7a6e7198d17f" containerName="dnsmasq-dns" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.976397 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="be7a20a5-350e-4f7d-8532-7a6e7198d17f" containerName="dnsmasq-dns" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.976746 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" containerName="nova-metadata-metadata" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.976779 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="be7a20a5-350e-4f7d-8532-7a6e7198d17f" containerName="dnsmasq-dns" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.976799 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" containerName="nova-metadata-log" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.976837 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c44e9b1-0ed5-466d-9602-e53210c2c34a" containerName="nova-manage" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.978322 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.980518 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:59:27 crc kubenswrapper[5007]: I0218 19:59:27.980527 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.005165 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.092375 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcz2j\" (UniqueName: \"kubernetes.io/projected/9773272a-428b-4151-9c85-ce97c98a467e-kube-api-access-gcz2j\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.092722 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.092786 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-config-data\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.092833 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9773272a-428b-4151-9c85-ce97c98a467e-logs\") pod \"nova-metadata-0\" 
(UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.092905 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.195029 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.195104 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcz2j\" (UniqueName: \"kubernetes.io/projected/9773272a-428b-4151-9c85-ce97c98a467e-kube-api-access-gcz2j\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.195208 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.195262 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-config-data\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 
19:59:28.195909 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9773272a-428b-4151-9c85-ce97c98a467e-logs\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.195307 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9773272a-428b-4151-9c85-ce97c98a467e-logs\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.200330 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-config-data\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.201568 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.201832 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.228125 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcz2j\" (UniqueName: \"kubernetes.io/projected/9773272a-428b-4151-9c85-ce97c98a467e-kube-api-access-gcz2j\") pod \"nova-metadata-0\" (UID: 
\"9773272a-428b-4151-9c85-ce97c98a467e\") " pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: E0218 19:59:28.229852 5007 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:59:28 crc kubenswrapper[5007]: E0218 19:59:28.242947 5007 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:59:28 crc kubenswrapper[5007]: E0218 19:59:28.245128 5007 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:59:28 crc kubenswrapper[5007]: E0218 19:59:28.245190 5007 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1b00f2e4-075e-4db2-a1da-9a43c08b8ec5" containerName="nova-scheduler-scheduler" Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.297488 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:59:28 crc kubenswrapper[5007]: W0218 19:59:28.798222 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9773272a_428b_4151_9c85_ce97c98a467e.slice/crio-8460680d7c9f5b2ee2006e8a7bd88141886bf9d05820c1b18691f26f49d0ef38 WatchSource:0}: Error finding container 8460680d7c9f5b2ee2006e8a7bd88141886bf9d05820c1b18691f26f49d0ef38: Status 404 returned error can't find the container with id 8460680d7c9f5b2ee2006e8a7bd88141886bf9d05820c1b18691f26f49d0ef38 Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.812138 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:59:28 crc kubenswrapper[5007]: I0218 19:59:28.917490 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9773272a-428b-4151-9c85-ce97c98a467e","Type":"ContainerStarted","Data":"8460680d7c9f5b2ee2006e8a7bd88141886bf9d05820c1b18691f26f49d0ef38"} Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.337412 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.425861 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-config-data\") pod \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.425948 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-scripts\") pod \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.430207 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-scripts" (OuterVolumeSpecName: "scripts") pod "b07421c1-d538-4d07-92d5-19e6ffcd0b3e" (UID: "b07421c1-d538-4d07-92d5-19e6ffcd0b3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.465029 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-config-data" (OuterVolumeSpecName: "config-data") pod "b07421c1-d538-4d07-92d5-19e6ffcd0b3e" (UID: "b07421c1-d538-4d07-92d5-19e6ffcd0b3e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.527212 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r4ff\" (UniqueName: \"kubernetes.io/projected/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-kube-api-access-5r4ff\") pod \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.527256 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-combined-ca-bundle\") pod \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\" (UID: \"b07421c1-d538-4d07-92d5-19e6ffcd0b3e\") " Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.527874 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.527891 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.529942 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-kube-api-access-5r4ff" (OuterVolumeSpecName: "kube-api-access-5r4ff") pod "b07421c1-d538-4d07-92d5-19e6ffcd0b3e" (UID: "b07421c1-d538-4d07-92d5-19e6ffcd0b3e"). InnerVolumeSpecName "kube-api-access-5r4ff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.558017 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b07421c1-d538-4d07-92d5-19e6ffcd0b3e" (UID: "b07421c1-d538-4d07-92d5-19e6ffcd0b3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.615502 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.629506 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5218352-e732-4b65-b885-8fb14b5b6e35-config-data\") pod \"f5218352-e732-4b65-b885-8fb14b5b6e35\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.630716 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r4ff\" (UniqueName: \"kubernetes.io/projected/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-kube-api-access-5r4ff\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.630751 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07421c1-d538-4d07-92d5-19e6ffcd0b3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.669323 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5218352-e732-4b65-b885-8fb14b5b6e35-config-data" (OuterVolumeSpecName: "config-data") pod "f5218352-e732-4b65-b885-8fb14b5b6e35" (UID: "f5218352-e732-4b65-b885-8fb14b5b6e35"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.731149 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5218352-e732-4b65-b885-8fb14b5b6e35-combined-ca-bundle\") pod \"f5218352-e732-4b65-b885-8fb14b5b6e35\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.731400 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh4pt\" (UniqueName: \"kubernetes.io/projected/f5218352-e732-4b65-b885-8fb14b5b6e35-kube-api-access-sh4pt\") pod \"f5218352-e732-4b65-b885-8fb14b5b6e35\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.731523 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5218352-e732-4b65-b885-8fb14b5b6e35-logs\") pod \"f5218352-e732-4b65-b885-8fb14b5b6e35\" (UID: \"f5218352-e732-4b65-b885-8fb14b5b6e35\") " Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.731842 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5218352-e732-4b65-b885-8fb14b5b6e35-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.731979 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5218352-e732-4b65-b885-8fb14b5b6e35-logs" (OuterVolumeSpecName: "logs") pod "f5218352-e732-4b65-b885-8fb14b5b6e35" (UID: "f5218352-e732-4b65-b885-8fb14b5b6e35"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.734927 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5218352-e732-4b65-b885-8fb14b5b6e35-kube-api-access-sh4pt" (OuterVolumeSpecName: "kube-api-access-sh4pt") pod "f5218352-e732-4b65-b885-8fb14b5b6e35" (UID: "f5218352-e732-4b65-b885-8fb14b5b6e35"). InnerVolumeSpecName "kube-api-access-sh4pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.779631 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5218352-e732-4b65-b885-8fb14b5b6e35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5218352-e732-4b65-b885-8fb14b5b6e35" (UID: "f5218352-e732-4b65-b885-8fb14b5b6e35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.834078 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh4pt\" (UniqueName: \"kubernetes.io/projected/f5218352-e732-4b65-b885-8fb14b5b6e35-kube-api-access-sh4pt\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.834117 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5218352-e732-4b65-b885-8fb14b5b6e35-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.834133 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5218352-e732-4b65-b885-8fb14b5b6e35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.909270 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.919342 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2" path="/var/lib/kubelet/pods/ffadfdfd-e7d0-4f2a-9c8a-ffd4b0b00ad2/volumes" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.926479 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xbb2l" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.926470 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xbb2l" event={"ID":"b07421c1-d538-4d07-92d5-19e6ffcd0b3e","Type":"ContainerDied","Data":"821c6d39cf4fb73e35423595dd9f40507271d85a108ab04b28ee7b24882cf931"} Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.926577 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821c6d39cf4fb73e35423595dd9f40507271d85a108ab04b28ee7b24882cf931" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.927874 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.927849 5007 generic.go:334] "Generic (PLEG): container finished" podID="1b00f2e4-075e-4db2-a1da-9a43c08b8ec5" containerID="06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb" exitCode=0 Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.927948 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5","Type":"ContainerDied","Data":"06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb"} Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.927992 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5","Type":"ContainerDied","Data":"afd2b7e3e304d59a4d680d824ef5a7ac2115a315dbb45773019df9fcea436137"} Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.928009 5007 scope.go:117] "RemoveContainer" containerID="06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.935857 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-config-data\") pod \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\" (UID: \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\") " Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.935894 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-combined-ca-bundle\") pod \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\" (UID: \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\") " Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.935942 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4rp7\" (UniqueName: 
\"kubernetes.io/projected/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-kube-api-access-l4rp7\") pod \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\" (UID: \"1b00f2e4-075e-4db2-a1da-9a43c08b8ec5\") " Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.948318 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9773272a-428b-4151-9c85-ce97c98a467e","Type":"ContainerStarted","Data":"3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b"} Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.948711 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9773272a-428b-4151-9c85-ce97c98a467e","Type":"ContainerStarted","Data":"b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75"} Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.950960 5007 generic.go:334] "Generic (PLEG): container finished" podID="f5218352-e732-4b65-b885-8fb14b5b6e35" containerID="3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43" exitCode=0 Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.950999 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5218352-e732-4b65-b885-8fb14b5b6e35","Type":"ContainerDied","Data":"3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43"} Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.951023 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f5218352-e732-4b65-b885-8fb14b5b6e35","Type":"ContainerDied","Data":"c58fe271fd3a0ecc6d8225ff71219fb34d478c6d71ebfcfa49e80f6dfbb9f3ab"} Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.951068 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.961544 5007 scope.go:117] "RemoveContainer" containerID="06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb" Feb 18 19:59:29 crc kubenswrapper[5007]: E0218 19:59:29.961988 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb\": container with ID starting with 06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb not found: ID does not exist" containerID="06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.962020 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb"} err="failed to get container status \"06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb\": rpc error: code = NotFound desc = could not find container \"06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb\": container with ID starting with 06ee9c7c16672e4a9ce111ebb7f69de32a397aae6efdf8f483c10d92bb8710bb not found: ID does not exist" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.962046 5007 scope.go:117] "RemoveContainer" containerID="3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.966634 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-kube-api-access-l4rp7" (OuterVolumeSpecName: "kube-api-access-l4rp7") pod "1b00f2e4-075e-4db2-a1da-9a43c08b8ec5" (UID: "1b00f2e4-075e-4db2-a1da-9a43c08b8ec5"). InnerVolumeSpecName "kube-api-access-l4rp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.988486 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-config-data" (OuterVolumeSpecName: "config-data") pod "1b00f2e4-075e-4db2-a1da-9a43c08b8ec5" (UID: "1b00f2e4-075e-4db2-a1da-9a43c08b8ec5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.993353 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b00f2e4-075e-4db2-a1da-9a43c08b8ec5" (UID: "1b00f2e4-075e-4db2-a1da-9a43c08b8ec5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:29 crc kubenswrapper[5007]: I0218 19:59:29.994060 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.99404407 podStartE2EDuration="2.99404407s" podCreationTimestamp="2026-02-18 19:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:59:29.968911326 +0000 UTC m=+1254.751030890" watchObservedRunningTime="2026-02-18 19:59:29.99404407 +0000 UTC m=+1254.776163594" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.001408 5007 scope.go:117] "RemoveContainer" containerID="90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.008141 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.034328 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:30 crc kubenswrapper[5007]: 
I0218 19:59:30.037959 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.038233 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.038331 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4rp7\" (UniqueName: \"kubernetes.io/projected/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5-kube-api-access-l4rp7\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.054567 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:30 crc kubenswrapper[5007]: E0218 19:59:30.055021 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07421c1-d538-4d07-92d5-19e6ffcd0b3e" containerName="nova-cell1-conductor-db-sync" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.055036 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07421c1-d538-4d07-92d5-19e6ffcd0b3e" containerName="nova-cell1-conductor-db-sync" Feb 18 19:59:30 crc kubenswrapper[5007]: E0218 19:59:30.055049 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5218352-e732-4b65-b885-8fb14b5b6e35" containerName="nova-api-api" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.055055 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5218352-e732-4b65-b885-8fb14b5b6e35" containerName="nova-api-api" Feb 18 19:59:30 crc kubenswrapper[5007]: E0218 19:59:30.055067 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5218352-e732-4b65-b885-8fb14b5b6e35" containerName="nova-api-log" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.055073 5007 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f5218352-e732-4b65-b885-8fb14b5b6e35" containerName="nova-api-log" Feb 18 19:59:30 crc kubenswrapper[5007]: E0218 19:59:30.055088 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b00f2e4-075e-4db2-a1da-9a43c08b8ec5" containerName="nova-scheduler-scheduler" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.055093 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b00f2e4-075e-4db2-a1da-9a43c08b8ec5" containerName="nova-scheduler-scheduler" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.055264 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="b07421c1-d538-4d07-92d5-19e6ffcd0b3e" containerName="nova-cell1-conductor-db-sync" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.055280 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5218352-e732-4b65-b885-8fb14b5b6e35" containerName="nova-api-log" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.055299 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b00f2e4-075e-4db2-a1da-9a43c08b8ec5" containerName="nova-scheduler-scheduler" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.055307 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5218352-e732-4b65-b885-8fb14b5b6e35" containerName="nova-api-api" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.056314 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.060335 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.067288 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.068580 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.074064 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.077913 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.102619 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.109586 5007 scope.go:117] "RemoveContainer" containerID="3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43" Feb 18 19:59:30 crc kubenswrapper[5007]: E0218 19:59:30.110049 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43\": container with ID starting with 3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43 not found: ID does not exist" containerID="3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.110076 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43"} err="failed to get container status \"3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43\": rpc error: code = NotFound desc = could not find container \"3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43\": container with ID starting with 3349cf5bc2e5ef36e3bef58d7850cdc456ea01159acc77f9df8f2ee3a8828d43 not found: ID does not exist" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.110099 5007 scope.go:117] "RemoveContainer" containerID="90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120" Feb 18 19:59:30 crc 
kubenswrapper[5007]: E0218 19:59:30.110584 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120\": container with ID starting with 90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120 not found: ID does not exist" containerID="90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.110637 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120"} err="failed to get container status \"90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120\": rpc error: code = NotFound desc = could not find container \"90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120\": container with ID starting with 90db0617c69fa09a94fdbb253a1237a5b2d1fc4c0d106fa768c196ec8a5ab120 not found: ID does not exist" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.139782 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt78h\" (UniqueName: \"kubernetes.io/projected/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-kube-api-access-tt78h\") pod \"nova-api-0\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.139826 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn5gz\" (UniqueName: \"kubernetes.io/projected/7d580b56-bf07-4864-bfd9-8118be0a239c-kube-api-access-wn5gz\") pod \"nova-cell1-conductor-0\" (UID: \"7d580b56-bf07-4864-bfd9-8118be0a239c\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.139852 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/7d580b56-bf07-4864-bfd9-8118be0a239c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7d580b56-bf07-4864-bfd9-8118be0a239c\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.139904 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-config-data\") pod \"nova-api-0\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.139941 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d580b56-bf07-4864-bfd9-8118be0a239c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7d580b56-bf07-4864-bfd9-8118be0a239c\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.139965 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-logs\") pod \"nova-api-0\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.140037 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.241605 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt78h\" (UniqueName: \"kubernetes.io/projected/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-kube-api-access-tt78h\") pod 
\"nova-api-0\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.241906 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn5gz\" (UniqueName: \"kubernetes.io/projected/7d580b56-bf07-4864-bfd9-8118be0a239c-kube-api-access-wn5gz\") pod \"nova-cell1-conductor-0\" (UID: \"7d580b56-bf07-4864-bfd9-8118be0a239c\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.242006 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d580b56-bf07-4864-bfd9-8118be0a239c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7d580b56-bf07-4864-bfd9-8118be0a239c\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.242159 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-config-data\") pod \"nova-api-0\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.242775 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d580b56-bf07-4864-bfd9-8118be0a239c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7d580b56-bf07-4864-bfd9-8118be0a239c\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.244607 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-logs\") pod \"nova-api-0\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.244943 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.245407 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-logs\") pod \"nova-api-0\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.252996 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.254097 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d580b56-bf07-4864-bfd9-8118be0a239c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7d580b56-bf07-4864-bfd9-8118be0a239c\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.257127 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d580b56-bf07-4864-bfd9-8118be0a239c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7d580b56-bf07-4864-bfd9-8118be0a239c\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.258192 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-config-data\") pod \"nova-api-0\" (UID: 
\"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.270983 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt78h\" (UniqueName: \"kubernetes.io/projected/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-kube-api-access-tt78h\") pod \"nova-api-0\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.274902 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn5gz\" (UniqueName: \"kubernetes.io/projected/7d580b56-bf07-4864-bfd9-8118be0a239c-kube-api-access-wn5gz\") pod \"nova-cell1-conductor-0\" (UID: \"7d580b56-bf07-4864-bfd9-8118be0a239c\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.286296 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.298107 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.319654 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.320785 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.327122 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.346406 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587de3e4-71d0-4a0f-8312-01fcc185dd4f-config-data\") pod \"nova-scheduler-0\" (UID: \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.346716 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m57cv\" (UniqueName: \"kubernetes.io/projected/587de3e4-71d0-4a0f-8312-01fcc185dd4f-kube-api-access-m57cv\") pod \"nova-scheduler-0\" (UID: \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.346869 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587de3e4-71d0-4a0f-8312-01fcc185dd4f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.354123 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.412160 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.431034 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.448708 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m57cv\" (UniqueName: \"kubernetes.io/projected/587de3e4-71d0-4a0f-8312-01fcc185dd4f-kube-api-access-m57cv\") pod \"nova-scheduler-0\" (UID: \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.448805 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587de3e4-71d0-4a0f-8312-01fcc185dd4f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.448914 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587de3e4-71d0-4a0f-8312-01fcc185dd4f-config-data\") pod \"nova-scheduler-0\" (UID: \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.453840 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587de3e4-71d0-4a0f-8312-01fcc185dd4f-config-data\") pod \"nova-scheduler-0\" (UID: \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.454691 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587de3e4-71d0-4a0f-8312-01fcc185dd4f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.468626 5007 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-m57cv\" (UniqueName: \"kubernetes.io/projected/587de3e4-71d0-4a0f-8312-01fcc185dd4f-kube-api-access-m57cv\") pod \"nova-scheduler-0\" (UID: \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\") " pod="openstack/nova-scheduler-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.673233 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.894889 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:30 crc kubenswrapper[5007]: W0218 19:59:30.899195 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfad1ce8d_e077_4554_9b1e_6e8eeeb264ad.slice/crio-77e227a37f1d004b49ffc5f00f8a9ad03d1a4670d2d3259fbe22a37006eb0ced WatchSource:0}: Error finding container 77e227a37f1d004b49ffc5f00f8a9ad03d1a4670d2d3259fbe22a37006eb0ced: Status 404 returned error can't find the container with id 77e227a37f1d004b49ffc5f00f8a9ad03d1a4670d2d3259fbe22a37006eb0ced Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.967402 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad","Type":"ContainerStarted","Data":"77e227a37f1d004b49ffc5f00f8a9ad03d1a4670d2d3259fbe22a37006eb0ced"} Feb 18 19:59:30 crc kubenswrapper[5007]: I0218 19:59:30.985447 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:59:30 crc kubenswrapper[5007]: W0218 19:59:30.987231 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d580b56_bf07_4864_bfd9_8118be0a239c.slice/crio-2493eb16b1d78191039bb0480a62ab1bcf6e3c62af10a024009c985462012a9b WatchSource:0}: Error finding container 2493eb16b1d78191039bb0480a62ab1bcf6e3c62af10a024009c985462012a9b: Status 404 
returned error can't find the container with id 2493eb16b1d78191039bb0480a62ab1bcf6e3c62af10a024009c985462012a9b Feb 18 19:59:31 crc kubenswrapper[5007]: I0218 19:59:31.148298 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:59:31 crc kubenswrapper[5007]: W0218 19:59:31.150941 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod587de3e4_71d0_4a0f_8312_01fcc185dd4f.slice/crio-4aa7b63fd06d41845609696ca3415e36e5bb640514c8ec287c266067e677f6c4 WatchSource:0}: Error finding container 4aa7b63fd06d41845609696ca3415e36e5bb640514c8ec287c266067e677f6c4: Status 404 returned error can't find the container with id 4aa7b63fd06d41845609696ca3415e36e5bb640514c8ec287c266067e677f6c4 Feb 18 19:59:31 crc kubenswrapper[5007]: I0218 19:59:31.920362 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b00f2e4-075e-4db2-a1da-9a43c08b8ec5" path="/var/lib/kubelet/pods/1b00f2e4-075e-4db2-a1da-9a43c08b8ec5/volumes" Feb 18 19:59:31 crc kubenswrapper[5007]: I0218 19:59:31.921217 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5218352-e732-4b65-b885-8fb14b5b6e35" path="/var/lib/kubelet/pods/f5218352-e732-4b65-b885-8fb14b5b6e35/volumes" Feb 18 19:59:31 crc kubenswrapper[5007]: I0218 19:59:31.981975 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad","Type":"ContainerStarted","Data":"8004b0ec6536f04393073ee3df1a74a94bcf769a5a962f3df60ba7de300efe20"} Feb 18 19:59:31 crc kubenswrapper[5007]: I0218 19:59:31.982019 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad","Type":"ContainerStarted","Data":"85d8ad0eea5f65180fc0764131ec553b85b8ed4028918079b2460c2f79179c76"} Feb 18 19:59:31 crc kubenswrapper[5007]: I0218 19:59:31.983816 5007 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7d580b56-bf07-4864-bfd9-8118be0a239c","Type":"ContainerStarted","Data":"3d9366ae39f7853e41f3ec7b687e85714a524bb42fdd2f178fc066c7460aba1c"} Feb 18 19:59:31 crc kubenswrapper[5007]: I0218 19:59:31.983879 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:31 crc kubenswrapper[5007]: I0218 19:59:31.983895 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7d580b56-bf07-4864-bfd9-8118be0a239c","Type":"ContainerStarted","Data":"2493eb16b1d78191039bb0480a62ab1bcf6e3c62af10a024009c985462012a9b"} Feb 18 19:59:31 crc kubenswrapper[5007]: I0218 19:59:31.987341 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"587de3e4-71d0-4a0f-8312-01fcc185dd4f","Type":"ContainerStarted","Data":"edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189"} Feb 18 19:59:31 crc kubenswrapper[5007]: I0218 19:59:31.987374 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"587de3e4-71d0-4a0f-8312-01fcc185dd4f","Type":"ContainerStarted","Data":"4aa7b63fd06d41845609696ca3415e36e5bb640514c8ec287c266067e677f6c4"} Feb 18 19:59:32 crc kubenswrapper[5007]: I0218 19:59:32.023764 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.023743345 podStartE2EDuration="2.023743345s" podCreationTimestamp="2026-02-18 19:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:59:32.013083631 +0000 UTC m=+1256.795203185" watchObservedRunningTime="2026-02-18 19:59:32.023743345 +0000 UTC m=+1256.805862869" Feb 18 19:59:32 crc kubenswrapper[5007]: I0218 19:59:32.037058 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.037040111 podStartE2EDuration="2.037040111s" podCreationTimestamp="2026-02-18 19:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:59:32.030209546 +0000 UTC m=+1256.812329070" watchObservedRunningTime="2026-02-18 19:59:32.037040111 +0000 UTC m=+1256.819159625" Feb 18 19:59:32 crc kubenswrapper[5007]: I0218 19:59:32.051250 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.051223599 podStartE2EDuration="2.051223599s" podCreationTimestamp="2026-02-18 19:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:59:32.045892422 +0000 UTC m=+1256.828011956" watchObservedRunningTime="2026-02-18 19:59:32.051223599 +0000 UTC m=+1256.833343123" Feb 18 19:59:33 crc kubenswrapper[5007]: I0218 19:59:33.298109 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:59:33 crc kubenswrapper[5007]: I0218 19:59:33.298488 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:59:35 crc kubenswrapper[5007]: I0218 19:59:35.673390 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:59:38 crc kubenswrapper[5007]: I0218 19:59:38.297857 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:59:38 crc kubenswrapper[5007]: I0218 19:59:38.298407 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:59:39 crc kubenswrapper[5007]: I0218 19:59:39.310614 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="9773272a-428b-4151-9c85-ce97c98a467e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:59:39 crc kubenswrapper[5007]: I0218 19:59:39.310702 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9773272a-428b-4151-9c85-ce97c98a467e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:59:40 crc kubenswrapper[5007]: I0218 19:59:40.413159 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:59:40 crc kubenswrapper[5007]: I0218 19:59:40.413239 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:59:40 crc kubenswrapper[5007]: I0218 19:59:40.473904 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 18 19:59:40 crc kubenswrapper[5007]: I0218 19:59:40.673415 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:59:40 crc kubenswrapper[5007]: I0218 19:59:40.711941 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:59:41 crc kubenswrapper[5007]: I0218 19:59:41.115077 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:59:41 crc kubenswrapper[5007]: I0218 19:59:41.453643 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:59:41 
crc kubenswrapper[5007]: I0218 19:59:41.494750 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:59:47 crc kubenswrapper[5007]: E0218 19:59:47.907642 5007 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod998c4c26_6713_456e_af55_7fc1ad00be06.slice/crio-conmon-b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod998c4c26_6713_456e_af55_7fc1ad00be06.slice/crio-b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.121557 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.180048 5007 generic.go:334] "Generic (PLEG): container finished" podID="998c4c26-6713-456e-af55-7fc1ad00be06" containerID="b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f" exitCode=137 Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.180127 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"998c4c26-6713-456e-af55-7fc1ad00be06","Type":"ContainerDied","Data":"b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f"} Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.180179 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"998c4c26-6713-456e-af55-7fc1ad00be06","Type":"ContainerDied","Data":"15622d6e686d3c0adda4d3ecd6f902f7db4485e3565f87f7e9c788597e01b26a"} Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.180219 5007 scope.go:117] "RemoveContainer" containerID="b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.180226 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.217351 5007 scope.go:117] "RemoveContainer" containerID="b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f" Feb 18 19:59:48 crc kubenswrapper[5007]: E0218 19:59:48.217772 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f\": container with ID starting with b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f not found: ID does not exist" containerID="b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.217806 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f"} err="failed to get container status \"b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f\": rpc error: code = NotFound desc = could not find container \"b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f\": container with ID starting with b70bf5b66e1beda0f36c0d6b90fc29525111241b9197dccaa777890238b31f3f not found: ID does not exist" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.247155 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgxhw\" (UniqueName: \"kubernetes.io/projected/998c4c26-6713-456e-af55-7fc1ad00be06-kube-api-access-tgxhw\") pod \"998c4c26-6713-456e-af55-7fc1ad00be06\" (UID: \"998c4c26-6713-456e-af55-7fc1ad00be06\") " Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.247405 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998c4c26-6713-456e-af55-7fc1ad00be06-config-data\") pod \"998c4c26-6713-456e-af55-7fc1ad00be06\" (UID: 
\"998c4c26-6713-456e-af55-7fc1ad00be06\") " Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.248360 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998c4c26-6713-456e-af55-7fc1ad00be06-combined-ca-bundle\") pod \"998c4c26-6713-456e-af55-7fc1ad00be06\" (UID: \"998c4c26-6713-456e-af55-7fc1ad00be06\") " Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.255931 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998c4c26-6713-456e-af55-7fc1ad00be06-kube-api-access-tgxhw" (OuterVolumeSpecName: "kube-api-access-tgxhw") pod "998c4c26-6713-456e-af55-7fc1ad00be06" (UID: "998c4c26-6713-456e-af55-7fc1ad00be06"). InnerVolumeSpecName "kube-api-access-tgxhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.289485 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998c4c26-6713-456e-af55-7fc1ad00be06-config-data" (OuterVolumeSpecName: "config-data") pod "998c4c26-6713-456e-af55-7fc1ad00be06" (UID: "998c4c26-6713-456e-af55-7fc1ad00be06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.306380 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.307073 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.318233 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998c4c26-6713-456e-af55-7fc1ad00be06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "998c4c26-6713-456e-af55-7fc1ad00be06" (UID: "998c4c26-6713-456e-af55-7fc1ad00be06"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.320388 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.351550 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgxhw\" (UniqueName: \"kubernetes.io/projected/998c4c26-6713-456e-af55-7fc1ad00be06-kube-api-access-tgxhw\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.351588 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998c4c26-6713-456e-af55-7fc1ad00be06-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.351601 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998c4c26-6713-456e-af55-7fc1ad00be06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.512527 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.532531 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.549609 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:59:48 crc kubenswrapper[5007]: E0218 19:59:48.550093 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998c4c26-6713-456e-af55-7fc1ad00be06" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.550114 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="998c4c26-6713-456e-af55-7fc1ad00be06" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:59:48 
crc kubenswrapper[5007]: I0218 19:59:48.550335 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="998c4c26-6713-456e-af55-7fc1ad00be06" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.551116 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.557855 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.558220 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.565111 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.581802 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.674816 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.674948 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.674976 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.675113 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.675154 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmnmk\" (UniqueName: \"kubernetes.io/projected/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-kube-api-access-gmnmk\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.777024 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.777092 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.777279 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.777326 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmnmk\" (UniqueName: \"kubernetes.io/projected/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-kube-api-access-gmnmk\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.777403 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.783821 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.785803 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.787696 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.790574 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.798934 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmnmk\" (UniqueName: \"kubernetes.io/projected/a1fb8c60-9030-48a7-859e-03df9b8c4d5a-kube-api-access-gmnmk\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1fb8c60-9030-48a7-859e-03df9b8c4d5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:48 crc kubenswrapper[5007]: I0218 19:59:48.879908 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:49 crc kubenswrapper[5007]: I0218 19:59:49.195829 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:59:49 crc kubenswrapper[5007]: I0218 19:59:49.267042 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:59:49 crc kubenswrapper[5007]: I0218 19:59:49.366850 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:59:49 crc kubenswrapper[5007]: I0218 19:59:49.920928 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="998c4c26-6713-456e-af55-7fc1ad00be06" path="/var/lib/kubelet/pods/998c4c26-6713-456e-af55-7fc1ad00be06/volumes" Feb 18 19:59:50 crc kubenswrapper[5007]: I0218 19:59:50.233202 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a1fb8c60-9030-48a7-859e-03df9b8c4d5a","Type":"ContainerStarted","Data":"0eac4e4cc26332fc08c9a93e85192b8bd689b431ca5debd5d5cd9edcb1e9bb59"} Feb 18 19:59:50 crc kubenswrapper[5007]: I0218 19:59:50.233268 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a1fb8c60-9030-48a7-859e-03df9b8c4d5a","Type":"ContainerStarted","Data":"95f461127439e89884e6fddc1680faf94e1c35a25faa110ae145bcf3079951b3"} Feb 18 19:59:50 crc kubenswrapper[5007]: I0218 19:59:50.258321 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.2582997479999998 podStartE2EDuration="2.258299748s" podCreationTimestamp="2026-02-18 19:59:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:59:50.255407412 +0000 UTC m=+1275.037526966" watchObservedRunningTime="2026-02-18 19:59:50.258299748 +0000 UTC 
m=+1275.040419282" Feb 18 19:59:50 crc kubenswrapper[5007]: I0218 19:59:50.420182 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:59:50 crc kubenswrapper[5007]: I0218 19:59:50.420912 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:59:50 crc kubenswrapper[5007]: I0218 19:59:50.421681 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:59:50 crc kubenswrapper[5007]: I0218 19:59:50.437924 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.244269 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.267354 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.458949 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr"] Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.461145 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.473817 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr"] Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.534247 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9gzv\" (UniqueName: \"kubernetes.io/projected/de972c5a-b715-4927-a929-4a36ac39ae34-kube-api-access-r9gzv\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.534294 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-dns-svc\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.534643 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-ovsdbserver-nb\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.534809 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-ovsdbserver-sb\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.535007 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-dns-swift-storage-0\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.535263 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-config\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.637387 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-ovsdbserver-nb\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.637488 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-ovsdbserver-sb\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.637547 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-dns-swift-storage-0\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.637615 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-config\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.637664 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9gzv\" (UniqueName: \"kubernetes.io/projected/de972c5a-b715-4927-a929-4a36ac39ae34-kube-api-access-r9gzv\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.637684 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-dns-svc\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.638304 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-ovsdbserver-nb\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.638603 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-dns-svc\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.638887 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-dns-swift-storage-0\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.639154 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-config\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.639397 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-ovsdbserver-sb\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.672169 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9gzv\" (UniqueName: \"kubernetes.io/projected/de972c5a-b715-4927-a929-4a36ac39ae34-kube-api-access-r9gzv\") pod \"dnsmasq-dns-7ffb4f4f5c-jtgzr\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:51 crc kubenswrapper[5007]: I0218 19:59:51.809102 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:52 crc kubenswrapper[5007]: I0218 19:59:52.300563 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr"] Feb 18 19:59:52 crc kubenswrapper[5007]: W0218 19:59:52.312114 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde972c5a_b715_4927_a929_4a36ac39ae34.slice/crio-a1edbad23c83af3d432a4a0ff7c84623bde8f258708a5cc981898393bc5650f6 WatchSource:0}: Error finding container a1edbad23c83af3d432a4a0ff7c84623bde8f258708a5cc981898393bc5650f6: Status 404 returned error can't find the container with id a1edbad23c83af3d432a4a0ff7c84623bde8f258708a5cc981898393bc5650f6 Feb 18 19:59:53 crc kubenswrapper[5007]: I0218 19:59:53.272470 5007 generic.go:334] "Generic (PLEG): container finished" podID="de972c5a-b715-4927-a929-4a36ac39ae34" containerID="879ebbc7b38f29dc82d48b8d5f4b7371744f7576b64aefa3cf9ecca8e384208c" exitCode=0 Feb 18 19:59:53 crc kubenswrapper[5007]: I0218 19:59:53.272553 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" event={"ID":"de972c5a-b715-4927-a929-4a36ac39ae34","Type":"ContainerDied","Data":"879ebbc7b38f29dc82d48b8d5f4b7371744f7576b64aefa3cf9ecca8e384208c"} Feb 18 19:59:53 crc kubenswrapper[5007]: I0218 19:59:53.272815 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" event={"ID":"de972c5a-b715-4927-a929-4a36ac39ae34","Type":"ContainerStarted","Data":"a1edbad23c83af3d432a4a0ff7c84623bde8f258708a5cc981898393bc5650f6"} Feb 18 19:59:53 crc kubenswrapper[5007]: I0218 19:59:53.491017 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:59:53 crc kubenswrapper[5007]: I0218 19:59:53.491562 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="ceilometer-central-agent" containerID="cri-o://0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25" gracePeriod=30 Feb 18 19:59:53 crc kubenswrapper[5007]: I0218 19:59:53.491680 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="proxy-httpd" containerID="cri-o://e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab" gracePeriod=30 Feb 18 19:59:53 crc kubenswrapper[5007]: I0218 19:59:53.491713 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="sg-core" containerID="cri-o://2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6" gracePeriod=30 Feb 18 19:59:53 crc kubenswrapper[5007]: I0218 19:59:53.491760 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="ceilometer-notification-agent" containerID="cri-o://7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906" gracePeriod=30 Feb 18 19:59:53 crc kubenswrapper[5007]: I0218 19:59:53.880517 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:54 crc kubenswrapper[5007]: I0218 19:59:54.283717 5007 generic.go:334] "Generic (PLEG): container finished" podID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerID="e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab" exitCode=0 Feb 18 19:59:54 crc kubenswrapper[5007]: I0218 19:59:54.283973 5007 generic.go:334] "Generic (PLEG): container finished" podID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerID="2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6" exitCode=2 Feb 18 19:59:54 crc kubenswrapper[5007]: I0218 19:59:54.284088 5007 generic.go:334] 
"Generic (PLEG): container finished" podID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerID="0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25" exitCode=0 Feb 18 19:59:54 crc kubenswrapper[5007]: I0218 19:59:54.283963 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ba44459-02df-4479-a9bb-ad0a9864e6f6","Type":"ContainerDied","Data":"e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab"} Feb 18 19:59:54 crc kubenswrapper[5007]: I0218 19:59:54.287881 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ba44459-02df-4479-a9bb-ad0a9864e6f6","Type":"ContainerDied","Data":"2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6"} Feb 18 19:59:54 crc kubenswrapper[5007]: I0218 19:59:54.288012 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ba44459-02df-4479-a9bb-ad0a9864e6f6","Type":"ContainerDied","Data":"0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25"} Feb 18 19:59:54 crc kubenswrapper[5007]: I0218 19:59:54.289649 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" event={"ID":"de972c5a-b715-4927-a929-4a36ac39ae34","Type":"ContainerStarted","Data":"bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523"} Feb 18 19:59:54 crc kubenswrapper[5007]: I0218 19:59:54.289944 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 19:59:54 crc kubenswrapper[5007]: I0218 19:59:54.313942 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:54 crc kubenswrapper[5007]: I0218 19:59:54.314383 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" containerName="nova-api-log" 
containerID="cri-o://85d8ad0eea5f65180fc0764131ec553b85b8ed4028918079b2460c2f79179c76" gracePeriod=30 Feb 18 19:59:54 crc kubenswrapper[5007]: I0218 19:59:54.314510 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" containerName="nova-api-api" containerID="cri-o://8004b0ec6536f04393073ee3df1a74a94bcf769a5a962f3df60ba7de300efe20" gracePeriod=30 Feb 18 19:59:54 crc kubenswrapper[5007]: I0218 19:59:54.332785 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" podStartSLOduration=3.332763896 podStartE2EDuration="3.332763896s" podCreationTimestamp="2026-02-18 19:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:59:54.32744629 +0000 UTC m=+1279.109565814" watchObservedRunningTime="2026-02-18 19:59:54.332763896 +0000 UTC m=+1279.114883420" Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.302643 5007 generic.go:334] "Generic (PLEG): container finished" podID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" containerID="8004b0ec6536f04393073ee3df1a74a94bcf769a5a962f3df60ba7de300efe20" exitCode=0 Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.303013 5007 generic.go:334] "Generic (PLEG): container finished" podID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" containerID="85d8ad0eea5f65180fc0764131ec553b85b8ed4028918079b2460c2f79179c76" exitCode=143 Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.302738 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad","Type":"ContainerDied","Data":"8004b0ec6536f04393073ee3df1a74a94bcf769a5a962f3df60ba7de300efe20"} Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.303118 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad","Type":"ContainerDied","Data":"85d8ad0eea5f65180fc0764131ec553b85b8ed4028918079b2460c2f79179c76"} Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.639536 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.818300 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt78h\" (UniqueName: \"kubernetes.io/projected/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-kube-api-access-tt78h\") pod \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.818757 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-config-data\") pod \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.818830 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-logs\") pod \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.818861 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-combined-ca-bundle\") pod \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\" (UID: \"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad\") " Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.819293 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-logs" (OuterVolumeSpecName: "logs") pod 
"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" (UID: "fad1ce8d-e077-4554-9b1e-6e8eeeb264ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.819685 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.828512 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-kube-api-access-tt78h" (OuterVolumeSpecName: "kube-api-access-tt78h") pod "fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" (UID: "fad1ce8d-e077-4554-9b1e-6e8eeeb264ad"). InnerVolumeSpecName "kube-api-access-tt78h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.851294 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-config-data" (OuterVolumeSpecName: "config-data") pod "fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" (UID: "fad1ce8d-e077-4554-9b1e-6e8eeeb264ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.852232 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" (UID: "fad1ce8d-e077-4554-9b1e-6e8eeeb264ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.923050 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.923160 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:55 crc kubenswrapper[5007]: I0218 19:59:55.923233 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt78h\" (UniqueName: \"kubernetes.io/projected/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad-kube-api-access-tt78h\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.317757 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fad1ce8d-e077-4554-9b1e-6e8eeeb264ad","Type":"ContainerDied","Data":"77e227a37f1d004b49ffc5f00f8a9ad03d1a4670d2d3259fbe22a37006eb0ced"} Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.317802 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.317822 5007 scope.go:117] "RemoveContainer" containerID="8004b0ec6536f04393073ee3df1a74a94bcf769a5a962f3df60ba7de300efe20" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.353518 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.362248 5007 scope.go:117] "RemoveContainer" containerID="85d8ad0eea5f65180fc0764131ec553b85b8ed4028918079b2460c2f79179c76" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.366308 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.378908 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:56 crc kubenswrapper[5007]: E0218 19:59:56.379648 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" containerName="nova-api-log" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.379671 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" containerName="nova-api-log" Feb 18 19:59:56 crc kubenswrapper[5007]: E0218 19:59:56.379687 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" containerName="nova-api-api" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.379693 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" containerName="nova-api-api" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.379872 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" containerName="nova-api-api" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.379898 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" 
containerName="nova-api-log" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.381044 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.383318 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.383983 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.391368 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.424716 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.537639 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-config-data\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.537698 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.537757 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-public-tls-certs\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 
19:59:56.537797 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjtb5\" (UniqueName: \"kubernetes.io/projected/7c8def50-80aa-49ff-9d1e-241b38d2a401-kube-api-access-tjtb5\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.537885 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.537908 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c8def50-80aa-49ff-9d1e-241b38d2a401-logs\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.639562 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.639811 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c8def50-80aa-49ff-9d1e-241b38d2a401-logs\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.639834 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-config-data\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.639870 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.640587 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-public-tls-certs\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.640648 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjtb5\" (UniqueName: \"kubernetes.io/projected/7c8def50-80aa-49ff-9d1e-241b38d2a401-kube-api-access-tjtb5\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.640678 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c8def50-80aa-49ff-9d1e-241b38d2a401-logs\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.644467 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.644843 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.647892 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-public-tls-certs\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.650075 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-config-data\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.659604 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjtb5\" (UniqueName: \"kubernetes.io/projected/7c8def50-80aa-49ff-9d1e-241b38d2a401-kube-api-access-tjtb5\") pod \"nova-api-0\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " pod="openstack/nova-api-0" Feb 18 19:59:56 crc kubenswrapper[5007]: I0218 19:59:56.707975 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.246162 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.297574 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.332179 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c8def50-80aa-49ff-9d1e-241b38d2a401","Type":"ContainerStarted","Data":"c70f1fe0c08095dc27ade4394573e637dda3dab2df34f20c1011c554297e77c4"} Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.334212 5007 generic.go:334] "Generic (PLEG): container finished" podID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerID="7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906" exitCode=0 Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.334243 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ba44459-02df-4479-a9bb-ad0a9864e6f6","Type":"ContainerDied","Data":"7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906"} Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.334259 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ba44459-02df-4479-a9bb-ad0a9864e6f6","Type":"ContainerDied","Data":"364ebeb9debb2916017b170bb9ec002b43e57df0376ed838c74b7b6671cbea85"} Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.334280 5007 scope.go:117] "RemoveContainer" containerID="e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.334446 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.359749 5007 scope.go:117] "RemoveContainer" containerID="2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.393907 5007 scope.go:117] "RemoveContainer" containerID="7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.424603 5007 scope.go:117] "RemoveContainer" containerID="0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.450169 5007 scope.go:117] "RemoveContainer" containerID="e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab" Feb 18 19:59:57 crc kubenswrapper[5007]: E0218 19:59:57.455081 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab\": container with ID starting with e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab not found: ID does not exist" containerID="e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.455133 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab"} err="failed to get container status \"e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab\": rpc error: code = NotFound desc = could not find container \"e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab\": container with ID starting with e30ed485955cc65f2edcf4a8dee591b27a48cc7a87e240a2eea99438ea048eab not found: ID does not exist" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.455160 5007 scope.go:117] "RemoveContainer" 
containerID="2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6" Feb 18 19:59:57 crc kubenswrapper[5007]: E0218 19:59:57.455788 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6\": container with ID starting with 2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6 not found: ID does not exist" containerID="2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.455822 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6"} err="failed to get container status \"2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6\": rpc error: code = NotFound desc = could not find container \"2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6\": container with ID starting with 2b4ad140dfaf667a1fe1442a19e7de99938aef8a7ff37e2a89b2aafa6645d3a6 not found: ID does not exist" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.455848 5007 scope.go:117] "RemoveContainer" containerID="7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906" Feb 18 19:59:57 crc kubenswrapper[5007]: E0218 19:59:57.456148 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906\": container with ID starting with 7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906 not found: ID does not exist" containerID="7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.456177 5007 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906"} err="failed to get container status \"7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906\": rpc error: code = NotFound desc = could not find container \"7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906\": container with ID starting with 7373e430515782f99942596a44dcbb6fac14604569b93e428d843e0bf9643906 not found: ID does not exist" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.456199 5007 scope.go:117] "RemoveContainer" containerID="0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25" Feb 18 19:59:57 crc kubenswrapper[5007]: E0218 19:59:57.456744 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25\": container with ID starting with 0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25 not found: ID does not exist" containerID="0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.457013 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25"} err="failed to get container status \"0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25\": rpc error: code = NotFound desc = could not find container \"0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25\": container with ID starting with 0dcfd23bc6399f8ef5cd1f4a28d5ab3daac581f4d27d2c463cf009653984be25 not found: ID does not exist" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.474106 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-config-data\") pod \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\" 
(UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.474192 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-scripts\") pod \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.476733 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dprcr\" (UniqueName: \"kubernetes.io/projected/1ba44459-02df-4479-a9bb-ad0a9864e6f6-kube-api-access-dprcr\") pod \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.476866 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-sg-core-conf-yaml\") pod \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.476940 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ba44459-02df-4479-a9bb-ad0a9864e6f6-run-httpd\") pod \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.476966 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ba44459-02df-4479-a9bb-ad0a9864e6f6-log-httpd\") pod \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.477031 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-ceilometer-tls-certs\") pod \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.477058 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-combined-ca-bundle\") pod \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\" (UID: \"1ba44459-02df-4479-a9bb-ad0a9864e6f6\") " Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.477897 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-scripts" (OuterVolumeSpecName: "scripts") pod "1ba44459-02df-4479-a9bb-ad0a9864e6f6" (UID: "1ba44459-02df-4479-a9bb-ad0a9864e6f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.478336 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ba44459-02df-4479-a9bb-ad0a9864e6f6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1ba44459-02df-4479-a9bb-ad0a9864e6f6" (UID: "1ba44459-02df-4479-a9bb-ad0a9864e6f6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.478862 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ba44459-02df-4479-a9bb-ad0a9864e6f6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1ba44459-02df-4479-a9bb-ad0a9864e6f6" (UID: "1ba44459-02df-4479-a9bb-ad0a9864e6f6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.483418 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba44459-02df-4479-a9bb-ad0a9864e6f6-kube-api-access-dprcr" (OuterVolumeSpecName: "kube-api-access-dprcr") pod "1ba44459-02df-4479-a9bb-ad0a9864e6f6" (UID: "1ba44459-02df-4479-a9bb-ad0a9864e6f6"). InnerVolumeSpecName "kube-api-access-dprcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.516270 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1ba44459-02df-4479-a9bb-ad0a9864e6f6" (UID: "1ba44459-02df-4479-a9bb-ad0a9864e6f6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.555509 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1ba44459-02df-4479-a9bb-ad0a9864e6f6" (UID: "1ba44459-02df-4479-a9bb-ad0a9864e6f6"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.568611 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ba44459-02df-4479-a9bb-ad0a9864e6f6" (UID: "1ba44459-02df-4479-a9bb-ad0a9864e6f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.579412 5007 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ba44459-02df-4479-a9bb-ad0a9864e6f6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.579619 5007 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ba44459-02df-4479-a9bb-ad0a9864e6f6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.579700 5007 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.579758 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.579823 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.579883 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dprcr\" (UniqueName: \"kubernetes.io/projected/1ba44459-02df-4479-a9bb-ad0a9864e6f6-kube-api-access-dprcr\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.579944 5007 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.605691 5007 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-config-data" (OuterVolumeSpecName: "config-data") pod "1ba44459-02df-4479-a9bb-ad0a9864e6f6" (UID: "1ba44459-02df-4479-a9bb-ad0a9864e6f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.681664 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ba44459-02df-4479-a9bb-ad0a9864e6f6-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.733969 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.744942 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.754874 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:59:57 crc kubenswrapper[5007]: E0218 19:59:57.755272 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="sg-core" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.755286 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="sg-core" Feb 18 19:59:57 crc kubenswrapper[5007]: E0218 19:59:57.755299 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="ceilometer-central-agent" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.755308 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="ceilometer-central-agent" Feb 18 19:59:57 crc kubenswrapper[5007]: E0218 19:59:57.755331 5007 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="ceilometer-notification-agent" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.755336 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="ceilometer-notification-agent" Feb 18 19:59:57 crc kubenswrapper[5007]: E0218 19:59:57.755356 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="proxy-httpd" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.755362 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="proxy-httpd" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.755584 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="sg-core" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.755610 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="ceilometer-central-agent" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.755625 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="ceilometer-notification-agent" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.755643 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" containerName="proxy-httpd" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.757311 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.761051 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.761144 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.761214 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.777078 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.784386 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-config-data\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.784506 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.784533 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80d7f5e4-e1df-4269-abae-40d443d7e962-log-httpd\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.784626 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.784680 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-scripts\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.784712 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80d7f5e4-e1df-4269-abae-40d443d7e962-run-httpd\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.784750 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.784781 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6lv9\" (UniqueName: \"kubernetes.io/projected/80d7f5e4-e1df-4269-abae-40d443d7e962-kube-api-access-j6lv9\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.886072 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-config-data\") pod \"ceilometer-0\" (UID: 
\"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.886129 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.886162 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80d7f5e4-e1df-4269-abae-40d443d7e962-log-httpd\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.886288 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.886359 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-scripts\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.886410 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80d7f5e4-e1df-4269-abae-40d443d7e962-run-httpd\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.886489 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.886524 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6lv9\" (UniqueName: \"kubernetes.io/projected/80d7f5e4-e1df-4269-abae-40d443d7e962-kube-api-access-j6lv9\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.887139 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80d7f5e4-e1df-4269-abae-40d443d7e962-run-httpd\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.887485 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80d7f5e4-e1df-4269-abae-40d443d7e962-log-httpd\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.892059 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.894391 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 
19:59:57.899265 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-config-data\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.900328 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-scripts\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.906370 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/80d7f5e4-e1df-4269-abae-40d443d7e962-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.916761 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6lv9\" (UniqueName: \"kubernetes.io/projected/80d7f5e4-e1df-4269-abae-40d443d7e962-kube-api-access-j6lv9\") pod \"ceilometer-0\" (UID: \"80d7f5e4-e1df-4269-abae-40d443d7e962\") " pod="openstack/ceilometer-0" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.922550 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ba44459-02df-4479-a9bb-ad0a9864e6f6" path="/var/lib/kubelet/pods/1ba44459-02df-4479-a9bb-ad0a9864e6f6/volumes" Feb 18 19:59:57 crc kubenswrapper[5007]: I0218 19:59:57.923622 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad1ce8d-e077-4554-9b1e-6e8eeeb264ad" path="/var/lib/kubelet/pods/fad1ce8d-e077-4554-9b1e-6e8eeeb264ad/volumes" Feb 18 19:59:58 crc kubenswrapper[5007]: I0218 19:59:58.071088 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:59:58 crc kubenswrapper[5007]: I0218 19:59:58.345273 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c8def50-80aa-49ff-9d1e-241b38d2a401","Type":"ContainerStarted","Data":"9275bada7258c13d9fe560a69089a147ef4edd7e105e36cea578daaf6d76b74e"} Feb 18 19:59:58 crc kubenswrapper[5007]: I0218 19:59:58.345678 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c8def50-80aa-49ff-9d1e-241b38d2a401","Type":"ContainerStarted","Data":"6acfd56f85677a05c749f2698b52b51e300134e487c2901339a9b3d159fab2e7"} Feb 18 19:59:58 crc kubenswrapper[5007]: I0218 19:59:58.371921 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.371892262 podStartE2EDuration="2.371892262s" podCreationTimestamp="2026-02-18 19:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:59:58.364989806 +0000 UTC m=+1283.147109360" watchObservedRunningTime="2026-02-18 19:59:58.371892262 +0000 UTC m=+1283.154011786" Feb 18 19:59:58 crc kubenswrapper[5007]: W0218 19:59:58.551414 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80d7f5e4_e1df_4269_abae_40d443d7e962.slice/crio-e06e08d41f10f28f45d479d7daee93b7f74e5468272ea6aa59a333d4091b029d WatchSource:0}: Error finding container e06e08d41f10f28f45d479d7daee93b7f74e5468272ea6aa59a333d4091b029d: Status 404 returned error can't find the container with id e06e08d41f10f28f45d479d7daee93b7f74e5468272ea6aa59a333d4091b029d Feb 18 19:59:58 crc kubenswrapper[5007]: I0218 19:59:58.554840 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:59:58 crc kubenswrapper[5007]: I0218 19:59:58.883722 5007 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:58 crc kubenswrapper[5007]: I0218 19:59:58.917498 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.370888 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80d7f5e4-e1df-4269-abae-40d443d7e962","Type":"ContainerStarted","Data":"b18cc38a398e6338a1b8e3a8814a5f16b407caf22a07ca32780394e79caaf9bb"} Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.372873 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80d7f5e4-e1df-4269-abae-40d443d7e962","Type":"ContainerStarted","Data":"16fd13c1f5876cf42132ec7b479a2c302ad7e1c1eca2000216c40b1c7a56083d"} Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.372962 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80d7f5e4-e1df-4269-abae-40d443d7e962","Type":"ContainerStarted","Data":"e06e08d41f10f28f45d479d7daee93b7f74e5468272ea6aa59a333d4091b029d"} Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.391291 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.592243 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-d7zq4"] Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.597566 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.600369 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.600723 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.622594 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-d7zq4"] Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.725773 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-config-data\") pod \"nova-cell1-cell-mapping-d7zq4\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") " pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.726128 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-scripts\") pod \"nova-cell1-cell-mapping-d7zq4\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") " pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.726178 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-d7zq4\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") " pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.726321 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wtkq\" (UniqueName: 
\"kubernetes.io/projected/9e27342a-023c-465c-b6d6-32dadd9343da-kube-api-access-2wtkq\") pod \"nova-cell1-cell-mapping-d7zq4\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") " pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.832071 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-d7zq4\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") " pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.832237 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wtkq\" (UniqueName: \"kubernetes.io/projected/9e27342a-023c-465c-b6d6-32dadd9343da-kube-api-access-2wtkq\") pod \"nova-cell1-cell-mapping-d7zq4\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") " pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.832343 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-config-data\") pod \"nova-cell1-cell-mapping-d7zq4\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") " pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.832389 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-scripts\") pod \"nova-cell1-cell-mapping-d7zq4\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") " pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.845712 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-config-data\") pod \"nova-cell1-cell-mapping-d7zq4\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") " pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.846507 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-scripts\") pod \"nova-cell1-cell-mapping-d7zq4\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") " pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.851348 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-d7zq4\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") " pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.868726 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wtkq\" (UniqueName: \"kubernetes.io/projected/9e27342a-023c-465c-b6d6-32dadd9343da-kube-api-access-2wtkq\") pod \"nova-cell1-cell-mapping-d7zq4\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") " pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 19:59:59 crc kubenswrapper[5007]: I0218 19:59:59.927461 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-d7zq4" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.153101 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b"] Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.155106 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.158976 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.159112 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.169684 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b"] Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.241157 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a092a1a2-0a15-4f13-93b0-ec89f14b9474-secret-volume\") pod \"collect-profiles-29524080-s9l7b\" (UID: \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.241273 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578gp\" (UniqueName: \"kubernetes.io/projected/a092a1a2-0a15-4f13-93b0-ec89f14b9474-kube-api-access-578gp\") pod \"collect-profiles-29524080-s9l7b\" (UID: \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.241387 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a092a1a2-0a15-4f13-93b0-ec89f14b9474-config-volume\") pod \"collect-profiles-29524080-s9l7b\" (UID: \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.342763 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-578gp\" (UniqueName: \"kubernetes.io/projected/a092a1a2-0a15-4f13-93b0-ec89f14b9474-kube-api-access-578gp\") pod \"collect-profiles-29524080-s9l7b\" (UID: \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.342905 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a092a1a2-0a15-4f13-93b0-ec89f14b9474-config-volume\") pod \"collect-profiles-29524080-s9l7b\" (UID: \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.343017 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a092a1a2-0a15-4f13-93b0-ec89f14b9474-secret-volume\") pod \"collect-profiles-29524080-s9l7b\" (UID: \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.343800 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a092a1a2-0a15-4f13-93b0-ec89f14b9474-config-volume\") pod \"collect-profiles-29524080-s9l7b\" (UID: \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.349013 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a092a1a2-0a15-4f13-93b0-ec89f14b9474-secret-volume\") pod \"collect-profiles-29524080-s9l7b\" (UID: \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.363542 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-578gp\" (UniqueName: \"kubernetes.io/projected/a092a1a2-0a15-4f13-93b0-ec89f14b9474-kube-api-access-578gp\") pod \"collect-profiles-29524080-s9l7b\" (UID: \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.380896 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80d7f5e4-e1df-4269-abae-40d443d7e962","Type":"ContainerStarted","Data":"a2b2f42b2bf9d8ab818c3fa85dc7f97b600faa7c77378b19377159f1425f788b"} Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.441790 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-d7zq4"] Feb 18 20:00:00 crc kubenswrapper[5007]: I0218 20:00:00.488027 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" Feb 18 20:00:01 crc kubenswrapper[5007]: I0218 20:00:01.438774 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-d7zq4" event={"ID":"9e27342a-023c-465c-b6d6-32dadd9343da","Type":"ContainerStarted","Data":"5977faa6b021fce8c7cc7fdb6f358ea50f9cde97f3edffa25ed6c6c72f98e844"} Feb 18 20:00:01 crc kubenswrapper[5007]: I0218 20:00:01.439017 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-d7zq4" event={"ID":"9e27342a-023c-465c-b6d6-32dadd9343da","Type":"ContainerStarted","Data":"fed4606f800c1e5b6e54942f20b7d85a137d17f4dcec68c22f6a3d4a5838c829"} Feb 18 20:00:01 crc kubenswrapper[5007]: I0218 20:00:01.459177 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-d7zq4" podStartSLOduration=2.459124124 podStartE2EDuration="2.459124124s" podCreationTimestamp="2026-02-18 19:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:00:01.45722314 +0000 UTC m=+1286.239342664" watchObservedRunningTime="2026-02-18 20:00:01.459124124 +0000 UTC m=+1286.241243648" Feb 18 20:00:01 crc kubenswrapper[5007]: I0218 20:00:01.714310 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b"] Feb 18 20:00:01 crc kubenswrapper[5007]: I0218 20:00:01.810608 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 20:00:01 crc kubenswrapper[5007]: I0218 20:00:01.883817 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs"] Feb 18 20:00:01 crc kubenswrapper[5007]: I0218 20:00:01.884153 5007 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" podUID="1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" containerName="dnsmasq-dns" containerID="cri-o://d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be" gracePeriod=10 Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.346118 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.385311 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42hd4\" (UniqueName: \"kubernetes.io/projected/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-kube-api-access-42hd4\") pod \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.385363 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-dns-swift-storage-0\") pod \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.385416 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-dns-svc\") pod \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.385485 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-ovsdbserver-sb\") pod \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") " Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.385641 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-config\") pod \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") "
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.385709 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-ovsdbserver-nb\") pod \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\" (UID: \"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236\") "
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.414845 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-kube-api-access-42hd4" (OuterVolumeSpecName: "kube-api-access-42hd4") pod "1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" (UID: "1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236"). InnerVolumeSpecName "kube-api-access-42hd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.438324 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-config" (OuterVolumeSpecName: "config") pod "1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" (UID: "1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.458522 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" (UID: "1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.469685 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80d7f5e4-e1df-4269-abae-40d443d7e962","Type":"ContainerStarted","Data":"c5cc7f8ffb31687952be8f0acd92909f4b8c4cf139f9aa6f49a3af0891825fea"}
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.470333 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.472110 5007 generic.go:334] "Generic (PLEG): container finished" podID="1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" containerID="d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be" exitCode=0
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.472164 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs"
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.472196 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" event={"ID":"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236","Type":"ContainerDied","Data":"d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be"}
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.472225 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs" event={"ID":"1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236","Type":"ContainerDied","Data":"333b9f49d250873fb3676cd76fab9780772f547c479642cb39962b2f60db4cf9"}
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.472242 5007 scope.go:117] "RemoveContainer" containerID="d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be"
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.473392 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" (UID: "1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.474501 5007 generic.go:334] "Generic (PLEG): container finished" podID="a092a1a2-0a15-4f13-93b0-ec89f14b9474" containerID="4d29d61ad07dc8aa7f1655225a7f93cf7d6f91ed57e04c1a414db27394d28e8e" exitCode=0
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.474576 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" event={"ID":"a092a1a2-0a15-4f13-93b0-ec89f14b9474","Type":"ContainerDied","Data":"4d29d61ad07dc8aa7f1655225a7f93cf7d6f91ed57e04c1a414db27394d28e8e"}
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.474596 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" event={"ID":"a092a1a2-0a15-4f13-93b0-ec89f14b9474","Type":"ContainerStarted","Data":"21487a2c2eb42b3af2f88afc2550e9c39d098154430a2fe2df35aa62d52eb4fc"}
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.479629 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" (UID: "1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.480159 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" (UID: "1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.488503 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42hd4\" (UniqueName: \"kubernetes.io/projected/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-kube-api-access-42hd4\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.488528 5007 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.488538 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.488548 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.488563 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-config\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.488570 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.496198 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.198881451 podStartE2EDuration="5.496173594s" podCreationTimestamp="2026-02-18 19:59:57 +0000 UTC" firstStartedPulling="2026-02-18 19:59:58.553668949 +0000 UTC m=+1283.335788473" lastFinishedPulling="2026-02-18 20:00:01.850961092 +0000 UTC m=+1286.633080616" observedRunningTime="2026-02-18 20:00:02.487083881 +0000 UTC m=+1287.269203405" watchObservedRunningTime="2026-02-18 20:00:02.496173594 +0000 UTC m=+1287.278293118"
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.500647 5007 scope.go:117] "RemoveContainer" containerID="f14a5f48afa99d6c17db57bc6dceb5e7b691052c6f6db8a42ae874a7510a1224"
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.519903 5007 scope.go:117] "RemoveContainer" containerID="d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be"
Feb 18 20:00:02 crc kubenswrapper[5007]: E0218 20:00:02.520258 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be\": container with ID starting with d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be not found: ID does not exist" containerID="d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be"
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.520299 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be"} err="failed to get container status \"d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be\": rpc error: code = NotFound desc = could not find container \"d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be\": container with ID starting with d87a183b7389e15369c3b991ece9cf976612c9fcf464f096a101eeb86f5f19be not found: ID does not exist"
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.520323 5007 scope.go:117] "RemoveContainer" containerID="f14a5f48afa99d6c17db57bc6dceb5e7b691052c6f6db8a42ae874a7510a1224"
Feb 18 20:00:02 crc kubenswrapper[5007]: E0218 20:00:02.520686 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14a5f48afa99d6c17db57bc6dceb5e7b691052c6f6db8a42ae874a7510a1224\": container with ID starting with f14a5f48afa99d6c17db57bc6dceb5e7b691052c6f6db8a42ae874a7510a1224 not found: ID does not exist" containerID="f14a5f48afa99d6c17db57bc6dceb5e7b691052c6f6db8a42ae874a7510a1224"
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.520879 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14a5f48afa99d6c17db57bc6dceb5e7b691052c6f6db8a42ae874a7510a1224"} err="failed to get container status \"f14a5f48afa99d6c17db57bc6dceb5e7b691052c6f6db8a42ae874a7510a1224\": rpc error: code = NotFound desc = could not find container \"f14a5f48afa99d6c17db57bc6dceb5e7b691052c6f6db8a42ae874a7510a1224\": container with ID starting with f14a5f48afa99d6c17db57bc6dceb5e7b691052c6f6db8a42ae874a7510a1224 not found: ID does not exist"
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.838670 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs"]
Feb 18 20:00:02 crc kubenswrapper[5007]: I0218 20:00:02.851667 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fb5c7bcd5-5c8fs"]
Feb 18 20:00:03 crc kubenswrapper[5007]: I0218 20:00:03.900980 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b"
Feb 18 20:00:03 crc kubenswrapper[5007]: I0218 20:00:03.921210 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" path="/var/lib/kubelet/pods/1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236/volumes"
Feb 18 20:00:04 crc kubenswrapper[5007]: I0218 20:00:04.025545 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a092a1a2-0a15-4f13-93b0-ec89f14b9474-secret-volume\") pod \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\" (UID: \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\") "
Feb 18 20:00:04 crc kubenswrapper[5007]: I0218 20:00:04.025594 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-578gp\" (UniqueName: \"kubernetes.io/projected/a092a1a2-0a15-4f13-93b0-ec89f14b9474-kube-api-access-578gp\") pod \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\" (UID: \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\") "
Feb 18 20:00:04 crc kubenswrapper[5007]: I0218 20:00:04.025721 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a092a1a2-0a15-4f13-93b0-ec89f14b9474-config-volume\") pod \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\" (UID: \"a092a1a2-0a15-4f13-93b0-ec89f14b9474\") "
Feb 18 20:00:04 crc kubenswrapper[5007]: I0218 20:00:04.026858 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a092a1a2-0a15-4f13-93b0-ec89f14b9474-config-volume" (OuterVolumeSpecName: "config-volume") pod "a092a1a2-0a15-4f13-93b0-ec89f14b9474" (UID: "a092a1a2-0a15-4f13-93b0-ec89f14b9474"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:04 crc kubenswrapper[5007]: I0218 20:00:04.031362 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a092a1a2-0a15-4f13-93b0-ec89f14b9474-kube-api-access-578gp" (OuterVolumeSpecName: "kube-api-access-578gp") pod "a092a1a2-0a15-4f13-93b0-ec89f14b9474" (UID: "a092a1a2-0a15-4f13-93b0-ec89f14b9474"). InnerVolumeSpecName "kube-api-access-578gp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:00:04 crc kubenswrapper[5007]: I0218 20:00:04.032531 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a092a1a2-0a15-4f13-93b0-ec89f14b9474-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a092a1a2-0a15-4f13-93b0-ec89f14b9474" (UID: "a092a1a2-0a15-4f13-93b0-ec89f14b9474"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:04 crc kubenswrapper[5007]: I0218 20:00:04.128341 5007 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a092a1a2-0a15-4f13-93b0-ec89f14b9474-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:04 crc kubenswrapper[5007]: I0218 20:00:04.128372 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-578gp\" (UniqueName: \"kubernetes.io/projected/a092a1a2-0a15-4f13-93b0-ec89f14b9474-kube-api-access-578gp\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:04 crc kubenswrapper[5007]: I0218 20:00:04.128382 5007 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a092a1a2-0a15-4f13-93b0-ec89f14b9474-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:04 crc kubenswrapper[5007]: I0218 20:00:04.502305 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b" event={"ID":"a092a1a2-0a15-4f13-93b0-ec89f14b9474","Type":"ContainerDied","Data":"21487a2c2eb42b3af2f88afc2550e9c39d098154430a2fe2df35aa62d52eb4fc"}
Feb 18 20:00:04 crc kubenswrapper[5007]: I0218 20:00:04.502353 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21487a2c2eb42b3af2f88afc2550e9c39d098154430a2fe2df35aa62d52eb4fc"
Feb 18 20:00:04 crc kubenswrapper[5007]: I0218 20:00:04.502419 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b"
Feb 18 20:00:06 crc kubenswrapper[5007]: I0218 20:00:06.532164 5007 generic.go:334] "Generic (PLEG): container finished" podID="9e27342a-023c-465c-b6d6-32dadd9343da" containerID="5977faa6b021fce8c7cc7fdb6f358ea50f9cde97f3edffa25ed6c6c72f98e844" exitCode=0
Feb 18 20:00:06 crc kubenswrapper[5007]: I0218 20:00:06.532262 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-d7zq4" event={"ID":"9e27342a-023c-465c-b6d6-32dadd9343da","Type":"ContainerDied","Data":"5977faa6b021fce8c7cc7fdb6f358ea50f9cde97f3edffa25ed6c6c72f98e844"}
Feb 18 20:00:06 crc kubenswrapper[5007]: I0218 20:00:06.709032 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 18 20:00:06 crc kubenswrapper[5007]: I0218 20:00:06.709073 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 18 20:00:07 crc kubenswrapper[5007]: I0218 20:00:07.724595 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7c8def50-80aa-49ff-9d1e-241b38d2a401" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 20:00:07 crc kubenswrapper[5007]: I0218 20:00:07.724640 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7c8def50-80aa-49ff-9d1e-241b38d2a401" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 20:00:07 crc kubenswrapper[5007]: I0218 20:00:07.972754 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-d7zq4"
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.022219 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-combined-ca-bundle\") pod \"9e27342a-023c-465c-b6d6-32dadd9343da\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") "
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.022347 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wtkq\" (UniqueName: \"kubernetes.io/projected/9e27342a-023c-465c-b6d6-32dadd9343da-kube-api-access-2wtkq\") pod \"9e27342a-023c-465c-b6d6-32dadd9343da\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") "
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.022405 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-config-data\") pod \"9e27342a-023c-465c-b6d6-32dadd9343da\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") "
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.022468 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-scripts\") pod \"9e27342a-023c-465c-b6d6-32dadd9343da\" (UID: \"9e27342a-023c-465c-b6d6-32dadd9343da\") "
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.033709 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e27342a-023c-465c-b6d6-32dadd9343da-kube-api-access-2wtkq" (OuterVolumeSpecName: "kube-api-access-2wtkq") pod "9e27342a-023c-465c-b6d6-32dadd9343da" (UID: "9e27342a-023c-465c-b6d6-32dadd9343da"). InnerVolumeSpecName "kube-api-access-2wtkq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.033907 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-scripts" (OuterVolumeSpecName: "scripts") pod "9e27342a-023c-465c-b6d6-32dadd9343da" (UID: "9e27342a-023c-465c-b6d6-32dadd9343da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.060790 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e27342a-023c-465c-b6d6-32dadd9343da" (UID: "9e27342a-023c-465c-b6d6-32dadd9343da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.071945 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-config-data" (OuterVolumeSpecName: "config-data") pod "9e27342a-023c-465c-b6d6-32dadd9343da" (UID: "9e27342a-023c-465c-b6d6-32dadd9343da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.124704 5007 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.125042 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.125061 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wtkq\" (UniqueName: \"kubernetes.io/projected/9e27342a-023c-465c-b6d6-32dadd9343da-kube-api-access-2wtkq\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.125074 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e27342a-023c-465c-b6d6-32dadd9343da-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.555765 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-d7zq4" event={"ID":"9e27342a-023c-465c-b6d6-32dadd9343da","Type":"ContainerDied","Data":"fed4606f800c1e5b6e54942f20b7d85a137d17f4dcec68c22f6a3d4a5838c829"}
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.555812 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fed4606f800c1e5b6e54942f20b7d85a137d17f4dcec68c22f6a3d4a5838c829"
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.555888 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-d7zq4"
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.785950 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.786157 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7c8def50-80aa-49ff-9d1e-241b38d2a401" containerName="nova-api-log" containerID="cri-o://6acfd56f85677a05c749f2698b52b51e300134e487c2901339a9b3d159fab2e7" gracePeriod=30
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.786237 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7c8def50-80aa-49ff-9d1e-241b38d2a401" containerName="nova-api-api" containerID="cri-o://9275bada7258c13d9fe560a69089a147ef4edd7e105e36cea578daaf6d76b74e" gracePeriod=30
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.845478 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.845906 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="587de3e4-71d0-4a0f-8312-01fcc185dd4f" containerName="nova-scheduler-scheduler" containerID="cri-o://edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189" gracePeriod=30
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.867316 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.867601 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9773272a-428b-4151-9c85-ce97c98a467e" containerName="nova-metadata-log" containerID="cri-o://b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75" gracePeriod=30
Feb 18 20:00:08 crc kubenswrapper[5007]: I0218 20:00:08.867846 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9773272a-428b-4151-9c85-ce97c98a467e" containerName="nova-metadata-metadata" containerID="cri-o://3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b" gracePeriod=30
Feb 18 20:00:09 crc kubenswrapper[5007]: I0218 20:00:09.568824 5007 generic.go:334] "Generic (PLEG): container finished" podID="9773272a-428b-4151-9c85-ce97c98a467e" containerID="b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75" exitCode=143
Feb 18 20:00:09 crc kubenswrapper[5007]: I0218 20:00:09.568904 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9773272a-428b-4151-9c85-ce97c98a467e","Type":"ContainerDied","Data":"b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75"}
Feb 18 20:00:09 crc kubenswrapper[5007]: I0218 20:00:09.570978 5007 generic.go:334] "Generic (PLEG): container finished" podID="7c8def50-80aa-49ff-9d1e-241b38d2a401" containerID="6acfd56f85677a05c749f2698b52b51e300134e487c2901339a9b3d159fab2e7" exitCode=143
Feb 18 20:00:09 crc kubenswrapper[5007]: I0218 20:00:09.571020 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c8def50-80aa-49ff-9d1e-241b38d2a401","Type":"ContainerDied","Data":"6acfd56f85677a05c749f2698b52b51e300134e487c2901339a9b3d159fab2e7"}
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.170614 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.280478 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587de3e4-71d0-4a0f-8312-01fcc185dd4f-config-data\") pod \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\" (UID: \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\") "
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.280545 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m57cv\" (UniqueName: \"kubernetes.io/projected/587de3e4-71d0-4a0f-8312-01fcc185dd4f-kube-api-access-m57cv\") pod \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\" (UID: \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\") "
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.280610 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587de3e4-71d0-4a0f-8312-01fcc185dd4f-combined-ca-bundle\") pod \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\" (UID: \"587de3e4-71d0-4a0f-8312-01fcc185dd4f\") "
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.287279 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/587de3e4-71d0-4a0f-8312-01fcc185dd4f-kube-api-access-m57cv" (OuterVolumeSpecName: "kube-api-access-m57cv") pod "587de3e4-71d0-4a0f-8312-01fcc185dd4f" (UID: "587de3e4-71d0-4a0f-8312-01fcc185dd4f"). InnerVolumeSpecName "kube-api-access-m57cv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.311388 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587de3e4-71d0-4a0f-8312-01fcc185dd4f-config-data" (OuterVolumeSpecName: "config-data") pod "587de3e4-71d0-4a0f-8312-01fcc185dd4f" (UID: "587de3e4-71d0-4a0f-8312-01fcc185dd4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.320625 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587de3e4-71d0-4a0f-8312-01fcc185dd4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "587de3e4-71d0-4a0f-8312-01fcc185dd4f" (UID: "587de3e4-71d0-4a0f-8312-01fcc185dd4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.382797 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/587de3e4-71d0-4a0f-8312-01fcc185dd4f-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.382831 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m57cv\" (UniqueName: \"kubernetes.io/projected/587de3e4-71d0-4a0f-8312-01fcc185dd4f-kube-api-access-m57cv\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.382843 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/587de3e4-71d0-4a0f-8312-01fcc185dd4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.387891 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.483901 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcz2j\" (UniqueName: \"kubernetes.io/projected/9773272a-428b-4151-9c85-ce97c98a467e-kube-api-access-gcz2j\") pod \"9773272a-428b-4151-9c85-ce97c98a467e\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") "
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.483969 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-combined-ca-bundle\") pod \"9773272a-428b-4151-9c85-ce97c98a467e\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") "
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.484038 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-config-data\") pod \"9773272a-428b-4151-9c85-ce97c98a467e\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") "
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.484066 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-nova-metadata-tls-certs\") pod \"9773272a-428b-4151-9c85-ce97c98a467e\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") "
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.484151 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9773272a-428b-4151-9c85-ce97c98a467e-logs\") pod \"9773272a-428b-4151-9c85-ce97c98a467e\" (UID: \"9773272a-428b-4151-9c85-ce97c98a467e\") "
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.485005 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9773272a-428b-4151-9c85-ce97c98a467e-logs" (OuterVolumeSpecName: "logs") pod "9773272a-428b-4151-9c85-ce97c98a467e" (UID: "9773272a-428b-4151-9c85-ce97c98a467e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.487810 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9773272a-428b-4151-9c85-ce97c98a467e-kube-api-access-gcz2j" (OuterVolumeSpecName: "kube-api-access-gcz2j") pod "9773272a-428b-4151-9c85-ce97c98a467e" (UID: "9773272a-428b-4151-9c85-ce97c98a467e"). InnerVolumeSpecName "kube-api-access-gcz2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.533487 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9773272a-428b-4151-9c85-ce97c98a467e" (UID: "9773272a-428b-4151-9c85-ce97c98a467e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.548597 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-config-data" (OuterVolumeSpecName: "config-data") pod "9773272a-428b-4151-9c85-ce97c98a467e" (UID: "9773272a-428b-4151-9c85-ce97c98a467e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.552614 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9773272a-428b-4151-9c85-ce97c98a467e" (UID: "9773272a-428b-4151-9c85-ce97c98a467e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.586180 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcz2j\" (UniqueName: \"kubernetes.io/projected/9773272a-428b-4151-9c85-ce97c98a467e-kube-api-access-gcz2j\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.586221 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.586230 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.586238 5007 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9773272a-428b-4151-9c85-ce97c98a467e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.586249 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9773272a-428b-4151-9c85-ce97c98a467e-logs\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.589051 5007 generic.go:334] "Generic (PLEG): container finished" podID="9773272a-428b-4151-9c85-ce97c98a467e" containerID="3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b" exitCode=0
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.589095 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9773272a-428b-4151-9c85-ce97c98a467e","Type":"ContainerDied","Data":"3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b"}
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.589154 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9773272a-428b-4151-9c85-ce97c98a467e","Type":"ContainerDied","Data":"8460680d7c9f5b2ee2006e8a7bd88141886bf9d05820c1b18691f26f49d0ef38"}
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.589177 5007 scope.go:117] "RemoveContainer" containerID="3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b"
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.589113 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.590944 5007 generic.go:334] "Generic (PLEG): container finished" podID="587de3e4-71d0-4a0f-8312-01fcc185dd4f" containerID="edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189" exitCode=0
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.590988 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"587de3e4-71d0-4a0f-8312-01fcc185dd4f","Type":"ContainerDied","Data":"edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189"}
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.591013 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"587de3e4-71d0-4a0f-8312-01fcc185dd4f","Type":"ContainerDied","Data":"4aa7b63fd06d41845609696ca3415e36e5bb640514c8ec287c266067e677f6c4"}
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.591059 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.609088 5007 scope.go:117] "RemoveContainer" containerID="b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75"
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.633583 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.645018 5007 scope.go:117] "RemoveContainer" containerID="3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b"
Feb 18 20:00:10 crc kubenswrapper[5007]: E0218 20:00:10.645618 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b\": container with ID starting with 3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b not found: ID does not exist" containerID="3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b"
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.645650 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b"} err="failed to get container status \"3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b\": rpc error: code = NotFound desc = could not find container \"3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b\": container with ID starting with 3c00aa6490e3bbb1622c6a2c5d344e05d020f8a928dd3d3bfa10125fd5e6266b not found: ID does not exist"
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.645673 5007 scope.go:117] "RemoveContainer" containerID="b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75"
Feb 18 20:00:10 crc kubenswrapper[5007]: E0218 20:00:10.645866 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75\": container with ID starting with b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75 not found: ID does not exist" containerID="b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75"
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.645888 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75"} err="failed to get container status \"b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75\": rpc error: code = NotFound desc = could not find container \"b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75\": container with ID starting with b0b5e2986674ed58763966db5a35705fe052e3b4427a443c3b25b80d7e7f2d75 not found: ID does not exist"
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.645902 5007 scope.go:117] "RemoveContainer" containerID="edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189"
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.651275 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.660460 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.668937 5007 scope.go:117] "RemoveContainer" containerID="edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189"
Feb 18 20:00:10 crc kubenswrapper[5007]: E0218 20:00:10.679476 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189\": container with ID starting with edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189 not found: ID does not exist" containerID="edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189"
Feb 
18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.679671 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189"} err="failed to get container status \"edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189\": rpc error: code = NotFound desc = could not find container \"edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189\": container with ID starting with edeea7008ea8014d3fdb3fb98bd542dbdc7f6b7d922ef5dcb6bd86eb0db4e189 not found: ID does not exist" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.679792 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.688574 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 20:00:10 crc kubenswrapper[5007]: E0218 20:00:10.688989 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587de3e4-71d0-4a0f-8312-01fcc185dd4f" containerName="nova-scheduler-scheduler" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689005 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="587de3e4-71d0-4a0f-8312-01fcc185dd4f" containerName="nova-scheduler-scheduler" Feb 18 20:00:10 crc kubenswrapper[5007]: E0218 20:00:10.689022 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9773272a-428b-4151-9c85-ce97c98a467e" containerName="nova-metadata-log" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689030 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="9773272a-428b-4151-9c85-ce97c98a467e" containerName="nova-metadata-log" Feb 18 20:00:10 crc kubenswrapper[5007]: E0218 20:00:10.689045 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" containerName="init" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689051 5007 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" containerName="init" Feb 18 20:00:10 crc kubenswrapper[5007]: E0218 20:00:10.689063 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e27342a-023c-465c-b6d6-32dadd9343da" containerName="nova-manage" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689069 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e27342a-023c-465c-b6d6-32dadd9343da" containerName="nova-manage" Feb 18 20:00:10 crc kubenswrapper[5007]: E0218 20:00:10.689081 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9773272a-428b-4151-9c85-ce97c98a467e" containerName="nova-metadata-metadata" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689087 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="9773272a-428b-4151-9c85-ce97c98a467e" containerName="nova-metadata-metadata" Feb 18 20:00:10 crc kubenswrapper[5007]: E0218 20:00:10.689094 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a092a1a2-0a15-4f13-93b0-ec89f14b9474" containerName="collect-profiles" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689100 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a092a1a2-0a15-4f13-93b0-ec89f14b9474" containerName="collect-profiles" Feb 18 20:00:10 crc kubenswrapper[5007]: E0218 20:00:10.689126 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" containerName="dnsmasq-dns" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689131 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" containerName="dnsmasq-dns" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689301 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b1ecf9a-c0b9-4577-9298-b1ea7fe5c236" containerName="dnsmasq-dns" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689314 5007 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9e27342a-023c-465c-b6d6-32dadd9343da" containerName="nova-manage" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689321 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="a092a1a2-0a15-4f13-93b0-ec89f14b9474" containerName="collect-profiles" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689338 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="9773272a-428b-4151-9c85-ce97c98a467e" containerName="nova-metadata-log" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689349 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="9773272a-428b-4151-9c85-ce97c98a467e" containerName="nova-metadata-metadata" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.689365 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="587de3e4-71d0-4a0f-8312-01fcc185dd4f" containerName="nova-scheduler-scheduler" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.690011 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.692119 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.703462 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.717501 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.719367 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.721964 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.722212 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.724204 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.789753 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3175b1c-fe13-4e4f-b5ce-748a69d02913-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f3175b1c-fe13-4e4f-b5ce-748a69d02913\") " pod="openstack/nova-scheduler-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.789822 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b15eae-6cba-40db-b2bc-a97607ac24ca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.789851 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b15eae-6cba-40db-b2bc-a97607ac24ca-config-data\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.789992 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b15eae-6cba-40db-b2bc-a97607ac24ca-logs\") pod \"nova-metadata-0\" (UID: 
\"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.790088 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjwz\" (UniqueName: \"kubernetes.io/projected/03b15eae-6cba-40db-b2bc-a97607ac24ca-kube-api-access-2tjwz\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.790161 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tbb6\" (UniqueName: \"kubernetes.io/projected/f3175b1c-fe13-4e4f-b5ce-748a69d02913-kube-api-access-4tbb6\") pod \"nova-scheduler-0\" (UID: \"f3175b1c-fe13-4e4f-b5ce-748a69d02913\") " pod="openstack/nova-scheduler-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.790223 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03b15eae-6cba-40db-b2bc-a97607ac24ca-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.790354 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3175b1c-fe13-4e4f-b5ce-748a69d02913-config-data\") pod \"nova-scheduler-0\" (UID: \"f3175b1c-fe13-4e4f-b5ce-748a69d02913\") " pod="openstack/nova-scheduler-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.892893 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3175b1c-fe13-4e4f-b5ce-748a69d02913-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f3175b1c-fe13-4e4f-b5ce-748a69d02913\") " 
pod="openstack/nova-scheduler-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.892968 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b15eae-6cba-40db-b2bc-a97607ac24ca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.893007 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b15eae-6cba-40db-b2bc-a97607ac24ca-config-data\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.893036 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b15eae-6cba-40db-b2bc-a97607ac24ca-logs\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.893075 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjwz\" (UniqueName: \"kubernetes.io/projected/03b15eae-6cba-40db-b2bc-a97607ac24ca-kube-api-access-2tjwz\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.893116 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tbb6\" (UniqueName: \"kubernetes.io/projected/f3175b1c-fe13-4e4f-b5ce-748a69d02913-kube-api-access-4tbb6\") pod \"nova-scheduler-0\" (UID: \"f3175b1c-fe13-4e4f-b5ce-748a69d02913\") " pod="openstack/nova-scheduler-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.893156 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03b15eae-6cba-40db-b2bc-a97607ac24ca-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.893221 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3175b1c-fe13-4e4f-b5ce-748a69d02913-config-data\") pod \"nova-scheduler-0\" (UID: \"f3175b1c-fe13-4e4f-b5ce-748a69d02913\") " pod="openstack/nova-scheduler-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.894320 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b15eae-6cba-40db-b2bc-a97607ac24ca-logs\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.898403 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b15eae-6cba-40db-b2bc-a97607ac24ca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.898403 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3175b1c-fe13-4e4f-b5ce-748a69d02913-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f3175b1c-fe13-4e4f-b5ce-748a69d02913\") " pod="openstack/nova-scheduler-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.898735 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b15eae-6cba-40db-b2bc-a97607ac24ca-config-data\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 
20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.898838 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03b15eae-6cba-40db-b2bc-a97607ac24ca-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.899261 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3175b1c-fe13-4e4f-b5ce-748a69d02913-config-data\") pod \"nova-scheduler-0\" (UID: \"f3175b1c-fe13-4e4f-b5ce-748a69d02913\") " pod="openstack/nova-scheduler-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.911001 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tbb6\" (UniqueName: \"kubernetes.io/projected/f3175b1c-fe13-4e4f-b5ce-748a69d02913-kube-api-access-4tbb6\") pod \"nova-scheduler-0\" (UID: \"f3175b1c-fe13-4e4f-b5ce-748a69d02913\") " pod="openstack/nova-scheduler-0" Feb 18 20:00:10 crc kubenswrapper[5007]: I0218 20:00:10.911989 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjwz\" (UniqueName: \"kubernetes.io/projected/03b15eae-6cba-40db-b2bc-a97607ac24ca-kube-api-access-2tjwz\") pod \"nova-metadata-0\" (UID: \"03b15eae-6cba-40db-b2bc-a97607ac24ca\") " pod="openstack/nova-metadata-0" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.011169 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.036912 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 20:00:11 crc kubenswrapper[5007]: W0218 20:00:11.571500 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3175b1c_fe13_4e4f_b5ce_748a69d02913.slice/crio-8a5e1d8c9eb6ec27e47398444e5439417f9be9a1c43e87444a94395cf2e1ee59 WatchSource:0}: Error finding container 8a5e1d8c9eb6ec27e47398444e5439417f9be9a1c43e87444a94395cf2e1ee59: Status 404 returned error can't find the container with id 8a5e1d8c9eb6ec27e47398444e5439417f9be9a1c43e87444a94395cf2e1ee59 Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.579119 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.610244 5007 generic.go:334] "Generic (PLEG): container finished" podID="7c8def50-80aa-49ff-9d1e-241b38d2a401" containerID="9275bada7258c13d9fe560a69089a147ef4edd7e105e36cea578daaf6d76b74e" exitCode=0 Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.610486 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c8def50-80aa-49ff-9d1e-241b38d2a401","Type":"ContainerDied","Data":"9275bada7258c13d9fe560a69089a147ef4edd7e105e36cea578daaf6d76b74e"} Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.610562 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c8def50-80aa-49ff-9d1e-241b38d2a401","Type":"ContainerDied","Data":"c70f1fe0c08095dc27ade4394573e637dda3dab2df34f20c1011c554297e77c4"} Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.610614 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c70f1fe0c08095dc27ade4394573e637dda3dab2df34f20c1011c554297e77c4" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.614409 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"f3175b1c-fe13-4e4f-b5ce-748a69d02913","Type":"ContainerStarted","Data":"8a5e1d8c9eb6ec27e47398444e5439417f9be9a1c43e87444a94395cf2e1ee59"} Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.729771 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 20:00:11 crc kubenswrapper[5007]: W0218 20:00:11.736699 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03b15eae_6cba_40db_b2bc_a97607ac24ca.slice/crio-a7ce5bf39639414f1b00e7e7e5b3d910acff4d051b686ae88d7af2b009961e53 WatchSource:0}: Error finding container a7ce5bf39639414f1b00e7e7e5b3d910acff4d051b686ae88d7af2b009961e53: Status 404 returned error can't find the container with id a7ce5bf39639414f1b00e7e7e5b3d910acff4d051b686ae88d7af2b009961e53 Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.742101 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.812480 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-internal-tls-certs\") pod \"7c8def50-80aa-49ff-9d1e-241b38d2a401\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.812588 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c8def50-80aa-49ff-9d1e-241b38d2a401-logs\") pod \"7c8def50-80aa-49ff-9d1e-241b38d2a401\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.812709 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-config-data\") pod 
\"7c8def50-80aa-49ff-9d1e-241b38d2a401\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.812775 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-public-tls-certs\") pod \"7c8def50-80aa-49ff-9d1e-241b38d2a401\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.813003 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjtb5\" (UniqueName: \"kubernetes.io/projected/7c8def50-80aa-49ff-9d1e-241b38d2a401-kube-api-access-tjtb5\") pod \"7c8def50-80aa-49ff-9d1e-241b38d2a401\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.813109 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-combined-ca-bundle\") pod \"7c8def50-80aa-49ff-9d1e-241b38d2a401\" (UID: \"7c8def50-80aa-49ff-9d1e-241b38d2a401\") " Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.815000 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c8def50-80aa-49ff-9d1e-241b38d2a401-logs" (OuterVolumeSpecName: "logs") pod "7c8def50-80aa-49ff-9d1e-241b38d2a401" (UID: "7c8def50-80aa-49ff-9d1e-241b38d2a401"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.823491 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c8def50-80aa-49ff-9d1e-241b38d2a401-kube-api-access-tjtb5" (OuterVolumeSpecName: "kube-api-access-tjtb5") pod "7c8def50-80aa-49ff-9d1e-241b38d2a401" (UID: "7c8def50-80aa-49ff-9d1e-241b38d2a401"). InnerVolumeSpecName "kube-api-access-tjtb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.856006 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c8def50-80aa-49ff-9d1e-241b38d2a401" (UID: "7c8def50-80aa-49ff-9d1e-241b38d2a401"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.867469 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-config-data" (OuterVolumeSpecName: "config-data") pod "7c8def50-80aa-49ff-9d1e-241b38d2a401" (UID: "7c8def50-80aa-49ff-9d1e-241b38d2a401"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.877734 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7c8def50-80aa-49ff-9d1e-241b38d2a401" (UID: "7c8def50-80aa-49ff-9d1e-241b38d2a401"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.883021 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7c8def50-80aa-49ff-9d1e-241b38d2a401" (UID: "7c8def50-80aa-49ff-9d1e-241b38d2a401"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.915287 5007 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.915447 5007 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c8def50-80aa-49ff-9d1e-241b38d2a401-logs\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.915556 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.915606 5007 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.915658 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjtb5\" (UniqueName: \"kubernetes.io/projected/7c8def50-80aa-49ff-9d1e-241b38d2a401-kube-api-access-tjtb5\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.915708 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8def50-80aa-49ff-9d1e-241b38d2a401-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.920180 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="587de3e4-71d0-4a0f-8312-01fcc185dd4f" path="/var/lib/kubelet/pods/587de3e4-71d0-4a0f-8312-01fcc185dd4f/volumes" Feb 18 20:00:11 crc kubenswrapper[5007]: I0218 20:00:11.921210 5007 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="9773272a-428b-4151-9c85-ce97c98a467e" path="/var/lib/kubelet/pods/9773272a-428b-4151-9c85-ce97c98a467e/volumes" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.626907 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03b15eae-6cba-40db-b2bc-a97607ac24ca","Type":"ContainerStarted","Data":"48cb4f367d536c8e62b713286eaf82066a6d696c288cf93b1f57c948fcc67515"} Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.628112 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03b15eae-6cba-40db-b2bc-a97607ac24ca","Type":"ContainerStarted","Data":"a4bb00628ca75cd0d1a0375283e01d9ea6e3cb151fe5827f550d6ee5b3df0135"} Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.628192 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03b15eae-6cba-40db-b2bc-a97607ac24ca","Type":"ContainerStarted","Data":"a7ce5bf39639414f1b00e7e7e5b3d910acff4d051b686ae88d7af2b009961e53"} Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.628393 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.628526 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f3175b1c-fe13-4e4f-b5ce-748a69d02913","Type":"ContainerStarted","Data":"ac89ac9e9b3e77dfa036a1477337522e09c7eb407ddcb595075df2545319e99f"} Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.655882 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.65585419 podStartE2EDuration="2.65585419s" podCreationTimestamp="2026-02-18 20:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:00:12.647764138 +0000 UTC m=+1297.429883702" watchObservedRunningTime="2026-02-18 20:00:12.65585419 +0000 UTC m=+1297.437973724" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.685167 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.694225 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.708222 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.708202783 podStartE2EDuration="2.708202783s" podCreationTimestamp="2026-02-18 20:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:00:12.700997457 +0000 UTC m=+1297.483116991" watchObservedRunningTime="2026-02-18 20:00:12.708202783 +0000 UTC m=+1297.490322307" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.727182 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 20:00:12 crc kubenswrapper[5007]: E0218 20:00:12.727693 5007 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7c8def50-80aa-49ff-9d1e-241b38d2a401" containerName="nova-api-api" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.727718 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8def50-80aa-49ff-9d1e-241b38d2a401" containerName="nova-api-api" Feb 18 20:00:12 crc kubenswrapper[5007]: E0218 20:00:12.727746 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8def50-80aa-49ff-9d1e-241b38d2a401" containerName="nova-api-log" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.727753 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8def50-80aa-49ff-9d1e-241b38d2a401" containerName="nova-api-log" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.727987 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8def50-80aa-49ff-9d1e-241b38d2a401" containerName="nova-api-api" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.728009 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8def50-80aa-49ff-9d1e-241b38d2a401" containerName="nova-api-log" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.729304 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.732039 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.732191 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.732115 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.736164 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.837485 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2febcee3-885c-427f-be6b-949fcd4a9f65-config-data\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.837541 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqkh8\" (UniqueName: \"kubernetes.io/projected/2febcee3-885c-427f-be6b-949fcd4a9f65-kube-api-access-cqkh8\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.837597 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febcee3-885c-427f-be6b-949fcd4a9f65-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.837666 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febcee3-885c-427f-be6b-949fcd4a9f65-public-tls-certs\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.837711 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2febcee3-885c-427f-be6b-949fcd4a9f65-logs\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.837725 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2febcee3-885c-427f-be6b-949fcd4a9f65-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.939161 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2febcee3-885c-427f-be6b-949fcd4a9f65-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.939201 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2febcee3-885c-427f-be6b-949fcd4a9f65-logs\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.939247 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2febcee3-885c-427f-be6b-949fcd4a9f65-config-data\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc 
kubenswrapper[5007]: I0218 20:00:12.939275 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqkh8\" (UniqueName: \"kubernetes.io/projected/2febcee3-885c-427f-be6b-949fcd4a9f65-kube-api-access-cqkh8\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.939322 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febcee3-885c-427f-be6b-949fcd4a9f65-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.939388 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febcee3-885c-427f-be6b-949fcd4a9f65-public-tls-certs\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.940091 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2febcee3-885c-427f-be6b-949fcd4a9f65-logs\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.944765 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febcee3-885c-427f-be6b-949fcd4a9f65-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.944930 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2febcee3-885c-427f-be6b-949fcd4a9f65-config-data\") pod \"nova-api-0\" (UID: 
\"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.945850 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2febcee3-885c-427f-be6b-949fcd4a9f65-public-tls-certs\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.954951 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2febcee3-885c-427f-be6b-949fcd4a9f65-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:12 crc kubenswrapper[5007]: I0218 20:00:12.962094 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqkh8\" (UniqueName: \"kubernetes.io/projected/2febcee3-885c-427f-be6b-949fcd4a9f65-kube-api-access-cqkh8\") pod \"nova-api-0\" (UID: \"2febcee3-885c-427f-be6b-949fcd4a9f65\") " pod="openstack/nova-api-0" Feb 18 20:00:13 crc kubenswrapper[5007]: I0218 20:00:13.053693 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 20:00:13 crc kubenswrapper[5007]: W0218 20:00:13.516783 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2febcee3_885c_427f_be6b_949fcd4a9f65.slice/crio-aa4c3fc0de4840407b1565777628fae07a0c4f9583b988caa59d880a9bbf0b1f WatchSource:0}: Error finding container aa4c3fc0de4840407b1565777628fae07a0c4f9583b988caa59d880a9bbf0b1f: Status 404 returned error can't find the container with id aa4c3fc0de4840407b1565777628fae07a0c4f9583b988caa59d880a9bbf0b1f Feb 18 20:00:13 crc kubenswrapper[5007]: I0218 20:00:13.539450 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 20:00:13 crc kubenswrapper[5007]: I0218 20:00:13.641023 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2febcee3-885c-427f-be6b-949fcd4a9f65","Type":"ContainerStarted","Data":"aa4c3fc0de4840407b1565777628fae07a0c4f9583b988caa59d880a9bbf0b1f"} Feb 18 20:00:13 crc kubenswrapper[5007]: I0218 20:00:13.921920 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c8def50-80aa-49ff-9d1e-241b38d2a401" path="/var/lib/kubelet/pods/7c8def50-80aa-49ff-9d1e-241b38d2a401/volumes" Feb 18 20:00:14 crc kubenswrapper[5007]: I0218 20:00:14.653848 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2febcee3-885c-427f-be6b-949fcd4a9f65","Type":"ContainerStarted","Data":"4c1d4f6a00b4af218b0702cd0622a734ea7f9c02371611defe280d9709a7a778"} Feb 18 20:00:14 crc kubenswrapper[5007]: I0218 20:00:14.654161 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2febcee3-885c-427f-be6b-949fcd4a9f65","Type":"ContainerStarted","Data":"c68d2e3532d9297a55c7f1e6f81d2d6bfa925ebf6f185871cd370f14956b4569"} Feb 18 20:00:14 crc kubenswrapper[5007]: I0218 20:00:14.678602 5007 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.678566235 podStartE2EDuration="2.678566235s" podCreationTimestamp="2026-02-18 20:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:00:14.672412338 +0000 UTC m=+1299.454531862" watchObservedRunningTime="2026-02-18 20:00:14.678566235 +0000 UTC m=+1299.460685799" Feb 18 20:00:16 crc kubenswrapper[5007]: I0218 20:00:16.012259 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 20:00:16 crc kubenswrapper[5007]: I0218 20:00:16.037710 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 20:00:16 crc kubenswrapper[5007]: I0218 20:00:16.037794 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 20:00:21 crc kubenswrapper[5007]: I0218 20:00:21.012035 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 20:00:21 crc kubenswrapper[5007]: I0218 20:00:21.037822 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 20:00:21 crc kubenswrapper[5007]: I0218 20:00:21.038201 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 20:00:21 crc kubenswrapper[5007]: I0218 20:00:21.063712 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 20:00:21 crc kubenswrapper[5007]: I0218 20:00:21.767692 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 20:00:22 crc kubenswrapper[5007]: I0218 20:00:22.057704 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="03b15eae-6cba-40db-b2bc-a97607ac24ca" 
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 20:00:22 crc kubenswrapper[5007]: I0218 20:00:22.057762 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="03b15eae-6cba-40db-b2bc-a97607ac24ca" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 20:00:23 crc kubenswrapper[5007]: I0218 20:00:23.386298 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 20:00:23 crc kubenswrapper[5007]: I0218 20:00:23.386353 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 20:00:24 crc kubenswrapper[5007]: I0218 20:00:24.396734 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2febcee3-885c-427f-be6b-949fcd4a9f65" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 20:00:24 crc kubenswrapper[5007]: I0218 20:00:24.396731 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2febcee3-885c-427f-be6b-949fcd4a9f65" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.231:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 20:00:28 crc kubenswrapper[5007]: I0218 20:00:28.085925 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 20:00:31 crc kubenswrapper[5007]: I0218 20:00:31.046497 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 20:00:31 crc kubenswrapper[5007]: I0218 
20:00:31.049930 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 20:00:31 crc kubenswrapper[5007]: I0218 20:00:31.057789 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 20:00:31 crc kubenswrapper[5007]: I0218 20:00:31.850418 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 20:00:33 crc kubenswrapper[5007]: I0218 20:00:33.072746 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 20:00:33 crc kubenswrapper[5007]: I0218 20:00:33.073906 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 20:00:33 crc kubenswrapper[5007]: I0218 20:00:33.076881 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 20:00:33 crc kubenswrapper[5007]: I0218 20:00:33.089532 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 20:00:33 crc kubenswrapper[5007]: I0218 20:00:33.868744 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 20:00:33 crc kubenswrapper[5007]: I0218 20:00:33.885210 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 20:00:41 crc kubenswrapper[5007]: I0218 20:00:41.968779 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 20:00:42 crc kubenswrapper[5007]: I0218 20:00:42.839062 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 20:00:45 crc kubenswrapper[5007]: I0218 20:00:45.432023 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7e34afaa-d033-4975-9f1f-e81ab6b743a9" containerName="rabbitmq" 
containerID="cri-o://a0907fd6b319d397ea4996e29f9b606ce47dcc2e2611116a8afff4bb17779685" gracePeriod=604797 Feb 18 20:00:45 crc kubenswrapper[5007]: I0218 20:00:45.663893 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:00:45 crc kubenswrapper[5007]: I0218 20:00:45.664171 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:00:46 crc kubenswrapper[5007]: I0218 20:00:46.198047 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="48869d89-959c-4f88-b8cb-1eb83e7fce30" containerName="rabbitmq" containerID="cri-o://ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc" gracePeriod=604797 Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.020966 5007 generic.go:334] "Generic (PLEG): container finished" podID="7e34afaa-d033-4975-9f1f-e81ab6b743a9" containerID="a0907fd6b319d397ea4996e29f9b606ce47dcc2e2611116a8afff4bb17779685" exitCode=0 Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.021054 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e34afaa-d033-4975-9f1f-e81ab6b743a9","Type":"ContainerDied","Data":"a0907fd6b319d397ea4996e29f9b606ce47dcc2e2611116a8afff4bb17779685"} Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.021322 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"7e34afaa-d033-4975-9f1f-e81ab6b743a9","Type":"ContainerDied","Data":"1fc1052cf9179ee58f8c09083bb2c5125974403b297a98eb6776390acb1936d5"} Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.021339 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fc1052cf9179ee58f8c09083bb2c5125974403b297a98eb6776390acb1936d5" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.103003 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.118801 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-confd\") pod \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.118922 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-plugins-conf\") pod \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.119093 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-plugins\") pod \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.119159 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e34afaa-d033-4975-9f1f-e81ab6b743a9-erlang-cookie-secret\") pod \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " Feb 18 20:00:47 crc 
kubenswrapper[5007]: I0218 20:00:47.119210 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-config-data\") pod \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.119253 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.119470 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e34afaa-d033-4975-9f1f-e81ab6b743a9-pod-info\") pod \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.119556 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b8p5\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-kube-api-access-5b8p5\") pod \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.119611 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7e34afaa-d033-4975-9f1f-e81ab6b743a9" (UID: "7e34afaa-d033-4975-9f1f-e81ab6b743a9"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.119621 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-tls\") pod \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.120351 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7e34afaa-d033-4975-9f1f-e81ab6b743a9" (UID: "7e34afaa-d033-4975-9f1f-e81ab6b743a9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.120372 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-server-conf\") pod \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.120603 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-erlang-cookie\") pod \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\" (UID: \"7e34afaa-d033-4975-9f1f-e81ab6b743a9\") " Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.121407 5007 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.121466 5007 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.121977 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7e34afaa-d033-4975-9f1f-e81ab6b743a9" (UID: "7e34afaa-d033-4975-9f1f-e81ab6b743a9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.127827 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-kube-api-access-5b8p5" (OuterVolumeSpecName: "kube-api-access-5b8p5") pod "7e34afaa-d033-4975-9f1f-e81ab6b743a9" (UID: "7e34afaa-d033-4975-9f1f-e81ab6b743a9"). InnerVolumeSpecName "kube-api-access-5b8p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.128756 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7e34afaa-d033-4975-9f1f-e81ab6b743a9" (UID: "7e34afaa-d033-4975-9f1f-e81ab6b743a9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.129359 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "7e34afaa-d033-4975-9f1f-e81ab6b743a9" (UID: "7e34afaa-d033-4975-9f1f-e81ab6b743a9"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.150351 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7e34afaa-d033-4975-9f1f-e81ab6b743a9-pod-info" (OuterVolumeSpecName: "pod-info") pod "7e34afaa-d033-4975-9f1f-e81ab6b743a9" (UID: "7e34afaa-d033-4975-9f1f-e81ab6b743a9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.150491 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e34afaa-d033-4975-9f1f-e81ab6b743a9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7e34afaa-d033-4975-9f1f-e81ab6b743a9" (UID: "7e34afaa-d033-4975-9f1f-e81ab6b743a9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.223846 5007 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e34afaa-d033-4975-9f1f-e81ab6b743a9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.223894 5007 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.223907 5007 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e34afaa-d033-4975-9f1f-e81ab6b743a9-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.223920 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b8p5\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-kube-api-access-5b8p5\") on node \"crc\" 
DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.223933 5007 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.223945 5007 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.307465 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-server-conf" (OuterVolumeSpecName: "server-conf") pod "7e34afaa-d033-4975-9f1f-e81ab6b743a9" (UID: "7e34afaa-d033-4975-9f1f-e81ab6b743a9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.324249 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7e34afaa-d033-4975-9f1f-e81ab6b743a9" (UID: "7e34afaa-d033-4975-9f1f-e81ab6b743a9"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.326329 5007 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-server-conf\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.326364 5007 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e34afaa-d033-4975-9f1f-e81ab6b743a9-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.328766 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-config-data" (OuterVolumeSpecName: "config-data") pod "7e34afaa-d033-4975-9f1f-e81ab6b743a9" (UID: "7e34afaa-d033-4975-9f1f-e81ab6b743a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.337035 5007 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.435058 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e34afaa-d033-4975-9f1f-e81ab6b743a9-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.435101 5007 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.868250 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.960914 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-erlang-cookie\") pod \"48869d89-959c-4f88-b8cb-1eb83e7fce30\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") "
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.960963 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-config-data\") pod \"48869d89-959c-4f88-b8cb-1eb83e7fce30\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") "
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.961031 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-confd\") pod \"48869d89-959c-4f88-b8cb-1eb83e7fce30\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") "
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.961143 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"48869d89-959c-4f88-b8cb-1eb83e7fce30\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") "
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.961218 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-server-conf\") pod \"48869d89-959c-4f88-b8cb-1eb83e7fce30\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") "
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.961247 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48869d89-959c-4f88-b8cb-1eb83e7fce30-pod-info\") pod \"48869d89-959c-4f88-b8cb-1eb83e7fce30\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") "
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.961269 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-plugins-conf\") pod \"48869d89-959c-4f88-b8cb-1eb83e7fce30\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") "
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.961299 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-plugins\") pod \"48869d89-959c-4f88-b8cb-1eb83e7fce30\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") "
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.961322 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-tls\") pod \"48869d89-959c-4f88-b8cb-1eb83e7fce30\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") "
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.961387 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvc2m\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-kube-api-access-tvc2m\") pod \"48869d89-959c-4f88-b8cb-1eb83e7fce30\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") "
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.961411 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48869d89-959c-4f88-b8cb-1eb83e7fce30-erlang-cookie-secret\") pod \"48869d89-959c-4f88-b8cb-1eb83e7fce30\" (UID: \"48869d89-959c-4f88-b8cb-1eb83e7fce30\") "
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.961414 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "48869d89-959c-4f88-b8cb-1eb83e7fce30" (UID: "48869d89-959c-4f88-b8cb-1eb83e7fce30"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.961845 5007 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.963000 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "48869d89-959c-4f88-b8cb-1eb83e7fce30" (UID: "48869d89-959c-4f88-b8cb-1eb83e7fce30"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.964044 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "48869d89-959c-4f88-b8cb-1eb83e7fce30" (UID: "48869d89-959c-4f88-b8cb-1eb83e7fce30"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.970879 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/48869d89-959c-4f88-b8cb-1eb83e7fce30-pod-info" (OuterVolumeSpecName: "pod-info") pod "48869d89-959c-4f88-b8cb-1eb83e7fce30" (UID: "48869d89-959c-4f88-b8cb-1eb83e7fce30"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.971018 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48869d89-959c-4f88-b8cb-1eb83e7fce30-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "48869d89-959c-4f88-b8cb-1eb83e7fce30" (UID: "48869d89-959c-4f88-b8cb-1eb83e7fce30"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.973594 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "48869d89-959c-4f88-b8cb-1eb83e7fce30" (UID: "48869d89-959c-4f88-b8cb-1eb83e7fce30"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.975140 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-kube-api-access-tvc2m" (OuterVolumeSpecName: "kube-api-access-tvc2m") pod "48869d89-959c-4f88-b8cb-1eb83e7fce30" (UID: "48869d89-959c-4f88-b8cb-1eb83e7fce30"). InnerVolumeSpecName "kube-api-access-tvc2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:00:47 crc kubenswrapper[5007]: I0218 20:00:47.977577 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "48869d89-959c-4f88-b8cb-1eb83e7fce30" (UID: "48869d89-959c-4f88-b8cb-1eb83e7fce30"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.030671 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-config-data" (OuterVolumeSpecName: "config-data") pod "48869d89-959c-4f88-b8cb-1eb83e7fce30" (UID: "48869d89-959c-4f88-b8cb-1eb83e7fce30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.037683 5007 generic.go:334] "Generic (PLEG): container finished" podID="48869d89-959c-4f88-b8cb-1eb83e7fce30" containerID="ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc" exitCode=0
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.037810 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.038198 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"48869d89-959c-4f88-b8cb-1eb83e7fce30","Type":"ContainerDied","Data":"ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc"}
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.038237 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.038248 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"48869d89-959c-4f88-b8cb-1eb83e7fce30","Type":"ContainerDied","Data":"36a0b55dac1e28dec07db60df605f6e3dc266a9fcc1e8b64ea27c146971e7ec0"}
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.038259 5007 scope.go:117] "RemoveContainer" containerID="ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.065856 5007 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.065950 5007 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.065965 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvc2m\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-kube-api-access-tvc2m\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.065977 5007 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/48869d89-959c-4f88-b8cb-1eb83e7fce30-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.066052 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.066106 5007 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.066180 5007 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/48869d89-959c-4f88-b8cb-1eb83e7fce30-pod-info\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.066196 5007 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.067697 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-server-conf" (OuterVolumeSpecName: "server-conf") pod "48869d89-959c-4f88-b8cb-1eb83e7fce30" (UID: "48869d89-959c-4f88-b8cb-1eb83e7fce30"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.078126 5007 scope.go:117] "RemoveContainer" containerID="f191b0d60a7f09d46f1ce76499a3a0de5097bdafd084bf1c8da964160512e247"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.091465 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.113007 5007 scope.go:117] "RemoveContainer" containerID="ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc"
Feb 18 20:00:48 crc kubenswrapper[5007]: E0218 20:00:48.115531 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc\": container with ID starting with ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc not found: ID does not exist" containerID="ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.115566 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc"} err="failed to get container status \"ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc\": rpc error: code = NotFound desc = could not find container \"ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc\": container with ID starting with ea2a5aa1525880cc617d6963e5195462a6b998716448bc52265d04bf9be099dc not found: ID does not exist"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.115593 5007 scope.go:117] "RemoveContainer" containerID="f191b0d60a7f09d46f1ce76499a3a0de5097bdafd084bf1c8da964160512e247"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.115883 5007 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Feb 18 20:00:48 crc kubenswrapper[5007]: E0218 20:00:48.115917 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f191b0d60a7f09d46f1ce76499a3a0de5097bdafd084bf1c8da964160512e247\": container with ID starting with f191b0d60a7f09d46f1ce76499a3a0de5097bdafd084bf1c8da964160512e247 not found: ID does not exist" containerID="f191b0d60a7f09d46f1ce76499a3a0de5097bdafd084bf1c8da964160512e247"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.115970 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f191b0d60a7f09d46f1ce76499a3a0de5097bdafd084bf1c8da964160512e247"} err="failed to get container status \"f191b0d60a7f09d46f1ce76499a3a0de5097bdafd084bf1c8da964160512e247\": rpc error: code = NotFound desc = could not find container \"f191b0d60a7f09d46f1ce76499a3a0de5097bdafd084bf1c8da964160512e247\": container with ID starting with f191b0d60a7f09d46f1ce76499a3a0de5097bdafd084bf1c8da964160512e247 not found: ID does not exist"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.118320 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.144878 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 20:00:48 crc kubenswrapper[5007]: E0218 20:00:48.145567 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e34afaa-d033-4975-9f1f-e81ab6b743a9" containerName="setup-container"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.145586 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e34afaa-d033-4975-9f1f-e81ab6b743a9" containerName="setup-container"
Feb 18 20:00:48 crc kubenswrapper[5007]: E0218 20:00:48.145600 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e34afaa-d033-4975-9f1f-e81ab6b743a9" containerName="rabbitmq"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.145607 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e34afaa-d033-4975-9f1f-e81ab6b743a9" containerName="rabbitmq"
Feb 18 20:00:48 crc kubenswrapper[5007]: E0218 20:00:48.145620 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48869d89-959c-4f88-b8cb-1eb83e7fce30" containerName="rabbitmq"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.145626 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="48869d89-959c-4f88-b8cb-1eb83e7fce30" containerName="rabbitmq"
Feb 18 20:00:48 crc kubenswrapper[5007]: E0218 20:00:48.145703 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48869d89-959c-4f88-b8cb-1eb83e7fce30" containerName="setup-container"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.145711 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="48869d89-959c-4f88-b8cb-1eb83e7fce30" containerName="setup-container"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.145980 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="48869d89-959c-4f88-b8cb-1eb83e7fce30" containerName="rabbitmq"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.145998 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e34afaa-d033-4975-9f1f-e81ab6b743a9" containerName="rabbitmq"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.147184 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.152907 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.153142 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.153319 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.154413 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.154561 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.154755 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.154952 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nnmpd"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.168193 5007 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.168231 5007 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/48869d89-959c-4f88-b8cb-1eb83e7fce30-server-conf\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.171117 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.197045 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "48869d89-959c-4f88-b8cb-1eb83e7fce30" (UID: "48869d89-959c-4f88-b8cb-1eb83e7fce30"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.270257 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c6a7259-f35d-452f-a26b-79be196180ce-config-data\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.270584 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.270672 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c6a7259-f35d-452f-a26b-79be196180ce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.270702 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c6a7259-f35d-452f-a26b-79be196180ce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.270775 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c6a7259-f35d-452f-a26b-79be196180ce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.270798 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c6a7259-f35d-452f-a26b-79be196180ce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.270825 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c6a7259-f35d-452f-a26b-79be196180ce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.270910 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c6a7259-f35d-452f-a26b-79be196180ce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.271047 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frmdw\" (UniqueName: \"kubernetes.io/projected/7c6a7259-f35d-452f-a26b-79be196180ce-kube-api-access-frmdw\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.271148 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c6a7259-f35d-452f-a26b-79be196180ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.271243 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c6a7259-f35d-452f-a26b-79be196180ce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.271374 5007 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/48869d89-959c-4f88-b8cb-1eb83e7fce30-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.372565 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.372860 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.372909 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c6a7259-f35d-452f-a26b-79be196180ce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.372931 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c6a7259-f35d-452f-a26b-79be196180ce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.372963 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c6a7259-f35d-452f-a26b-79be196180ce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.372984 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c6a7259-f35d-452f-a26b-79be196180ce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.373000 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c6a7259-f35d-452f-a26b-79be196180ce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.373019 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c6a7259-f35d-452f-a26b-79be196180ce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.373050 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frmdw\" (UniqueName: \"kubernetes.io/projected/7c6a7259-f35d-452f-a26b-79be196180ce-kube-api-access-frmdw\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.373082 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c6a7259-f35d-452f-a26b-79be196180ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.373109 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c6a7259-f35d-452f-a26b-79be196180ce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.373126 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c6a7259-f35d-452f-a26b-79be196180ce-config-data\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.373896 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c6a7259-f35d-452f-a26b-79be196180ce-config-data\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.374118 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.376721 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c6a7259-f35d-452f-a26b-79be196180ce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.377307 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c6a7259-f35d-452f-a26b-79be196180ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.378085 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c6a7259-f35d-452f-a26b-79be196180ce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.378905 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c6a7259-f35d-452f-a26b-79be196180ce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.381582 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.384212 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c6a7259-f35d-452f-a26b-79be196180ce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.384264 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c6a7259-f35d-452f-a26b-79be196180ce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.385495 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c6a7259-f35d-452f-a26b-79be196180ce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.394113 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c6a7259-f35d-452f-a26b-79be196180ce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.410185 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frmdw\" (UniqueName: \"kubernetes.io/projected/7c6a7259-f35d-452f-a26b-79be196180ce-kube-api-access-frmdw\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.410259 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.412406 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.420063 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.420297 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.420471 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.420617 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.420729 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-h4sq6"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.420832 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.421154 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.438423 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.447262 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"7c6a7259-f35d-452f-a26b-79be196180ce\") " pod="openstack/rabbitmq-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.475322 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.475610 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.475650 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.475669 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.475702 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.475737 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName:
\"kubernetes.io/projected/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.475761 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.475803 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.475844 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.475873 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x42wd\" (UniqueName: \"kubernetes.io/projected/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-kube-api-access-x42wd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.475906 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.480337 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.580327 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x42wd\" (UniqueName: \"kubernetes.io/projected/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-kube-api-access-x42wd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.580416 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.580528 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.580585 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.580649 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.580684 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.580808 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.580874 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.580911 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.580983 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.581025 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.581573 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.585169 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.586052 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.586149 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.586731 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.586749 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.586908 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.586979 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.591614 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc 
kubenswrapper[5007]: I0218 20:00:48.596029 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.608399 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x42wd\" (UniqueName: \"kubernetes.io/projected/f35fa3cf-13db-47c4-b7b0-d67dd97674ba-kube-api-access-x42wd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.621029 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f35fa3cf-13db-47c4-b7b0-d67dd97674ba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.737602 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:00:48 crc kubenswrapper[5007]: I0218 20:00:48.757227 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 20:00:49 crc kubenswrapper[5007]: I0218 20:00:49.047623 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c6a7259-f35d-452f-a26b-79be196180ce","Type":"ContainerStarted","Data":"d6a3598a72cfb8638cf1f61d840fd8bea075d94b6687c129da558ecb7d12442e"} Feb 18 20:00:49 crc kubenswrapper[5007]: I0218 20:00:49.267125 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 20:00:49 crc kubenswrapper[5007]: W0218 20:00:49.267714 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf35fa3cf_13db_47c4_b7b0_d67dd97674ba.slice/crio-930b632d414080031d0b01ac49cea3258a7ecb71e4f71c2c25c0a3fd9a74df12 WatchSource:0}: Error finding container 930b632d414080031d0b01ac49cea3258a7ecb71e4f71c2c25c0a3fd9a74df12: Status 404 returned error can't find the container with id 930b632d414080031d0b01ac49cea3258a7ecb71e4f71c2c25c0a3fd9a74df12 Feb 18 20:00:49 crc kubenswrapper[5007]: I0218 20:00:49.920242 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48869d89-959c-4f88-b8cb-1eb83e7fce30" path="/var/lib/kubelet/pods/48869d89-959c-4f88-b8cb-1eb83e7fce30/volumes" Feb 18 20:00:49 crc kubenswrapper[5007]: I0218 20:00:49.921248 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e34afaa-d033-4975-9f1f-e81ab6b743a9" path="/var/lib/kubelet/pods/7e34afaa-d033-4975-9f1f-e81ab6b743a9/volumes" Feb 18 20:00:50 crc kubenswrapper[5007]: I0218 20:00:50.058031 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"f35fa3cf-13db-47c4-b7b0-d67dd97674ba","Type":"ContainerStarted","Data":"930b632d414080031d0b01ac49cea3258a7ecb71e4f71c2c25c0a3fd9a74df12"} Feb 18 20:00:51 crc kubenswrapper[5007]: I0218 20:00:51.072736 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c6a7259-f35d-452f-a26b-79be196180ce","Type":"ContainerStarted","Data":"1d9bed4b2cca2460d70299b0e1ba4507304b1b301e602936e292a2feadd284af"} Feb 18 20:00:51 crc kubenswrapper[5007]: I0218 20:00:51.077104 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f35fa3cf-13db-47c4-b7b0-d67dd97674ba","Type":"ContainerStarted","Data":"c60e68b82575913344513d6bff29ebdb99846ffea71d4393df4f42f39dd2da7b"} Feb 18 20:00:55 crc kubenswrapper[5007]: I0218 20:00:55.943423 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64d7f5f6d7-7tvdr"] Feb 18 20:00:55 crc kubenswrapper[5007]: I0218 20:00:55.947036 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:55 crc kubenswrapper[5007]: I0218 20:00:55.949804 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 20:00:55 crc kubenswrapper[5007]: I0218 20:00:55.959595 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64d7f5f6d7-7tvdr"] Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.029982 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-ovsdbserver-sb\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.030094 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.030166 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8h4h\" (UniqueName: \"kubernetes.io/projected/a3db05b5-5b48-4118-963d-86e22b4e78bc-kube-api-access-c8h4h\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.030195 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-dns-swift-storage-0\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: 
\"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.030255 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-ovsdbserver-nb\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.030292 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-dns-svc\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.030365 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-config\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.132245 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-dns-swift-storage-0\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.132345 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-ovsdbserver-nb\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: 
\"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.132389 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-dns-svc\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.132450 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-config\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.132496 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-ovsdbserver-sb\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.132536 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.132599 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8h4h\" (UniqueName: \"kubernetes.io/projected/a3db05b5-5b48-4118-963d-86e22b4e78bc-kube-api-access-c8h4h\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " 
pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.133555 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-dns-swift-storage-0\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.133753 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-dns-svc\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.133814 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-ovsdbserver-sb\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.133847 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-config\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.134181 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 
20:00:56.134352 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-ovsdbserver-nb\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.181534 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8h4h\" (UniqueName: \"kubernetes.io/projected/a3db05b5-5b48-4118-963d-86e22b4e78bc-kube-api-access-c8h4h\") pod \"dnsmasq-dns-64d7f5f6d7-7tvdr\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.268355 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:56 crc kubenswrapper[5007]: I0218 20:00:56.894503 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64d7f5f6d7-7tvdr"] Feb 18 20:00:57 crc kubenswrapper[5007]: I0218 20:00:57.134396 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" event={"ID":"a3db05b5-5b48-4118-963d-86e22b4e78bc","Type":"ContainerStarted","Data":"5f3510abb5caf8772ceda907e84c78daaad22bb7b372a9d97d18f06ec1ac1416"} Feb 18 20:00:58 crc kubenswrapper[5007]: I0218 20:00:58.150912 5007 generic.go:334] "Generic (PLEG): container finished" podID="a3db05b5-5b48-4118-963d-86e22b4e78bc" containerID="78813c149a9a022e37e970d1ba71725aab4717fc7e06060aa49d9cd5f2554bb3" exitCode=0 Feb 18 20:00:58 crc kubenswrapper[5007]: I0218 20:00:58.150953 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" event={"ID":"a3db05b5-5b48-4118-963d-86e22b4e78bc","Type":"ContainerDied","Data":"78813c149a9a022e37e970d1ba71725aab4717fc7e06060aa49d9cd5f2554bb3"} Feb 18 20:00:59 crc 
kubenswrapper[5007]: I0218 20:00:59.163148 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" event={"ID":"a3db05b5-5b48-4118-963d-86e22b4e78bc","Type":"ContainerStarted","Data":"6ba17ac0e6456e2667da2260eb0157dd5a5553fbb6cb89866ee3115645a49b06"} Feb 18 20:00:59 crc kubenswrapper[5007]: I0218 20:00:59.164751 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:00:59 crc kubenswrapper[5007]: I0218 20:00:59.186825 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" podStartSLOduration=4.186809142 podStartE2EDuration="4.186809142s" podCreationTimestamp="2026-02-18 20:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:00:59.185988248 +0000 UTC m=+1343.968107772" watchObservedRunningTime="2026-02-18 20:00:59.186809142 +0000 UTC m=+1343.968928656" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.140931 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524081-xpr64"] Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.143880 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.150756 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524081-xpr64"] Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.243276 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-config-data\") pod \"keystone-cron-29524081-xpr64\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.243351 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-combined-ca-bundle\") pod \"keystone-cron-29524081-xpr64\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.243373 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lf5r\" (UniqueName: \"kubernetes.io/projected/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-kube-api-access-6lf5r\") pod \"keystone-cron-29524081-xpr64\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.243468 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-fernet-keys\") pod \"keystone-cron-29524081-xpr64\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.345939 5007 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-config-data\") pod \"keystone-cron-29524081-xpr64\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.346014 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-combined-ca-bundle\") pod \"keystone-cron-29524081-xpr64\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.346054 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lf5r\" (UniqueName: \"kubernetes.io/projected/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-kube-api-access-6lf5r\") pod \"keystone-cron-29524081-xpr64\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.346088 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-fernet-keys\") pod \"keystone-cron-29524081-xpr64\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.353327 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-fernet-keys\") pod \"keystone-cron-29524081-xpr64\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.360065 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-combined-ca-bundle\") pod \"keystone-cron-29524081-xpr64\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.360834 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-config-data\") pod \"keystone-cron-29524081-xpr64\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.386620 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lf5r\" (UniqueName: \"kubernetes.io/projected/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-kube-api-access-6lf5r\") pod \"keystone-cron-29524081-xpr64\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.475684 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:00 crc kubenswrapper[5007]: I0218 20:01:00.933415 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524081-xpr64"] Feb 18 20:01:01 crc kubenswrapper[5007]: I0218 20:01:01.208537 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-xpr64" event={"ID":"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4","Type":"ContainerStarted","Data":"c88af8f9771773b8ae0588d5e60dd5efc3d0bb89e53214c2702bf6f683162ad2"} Feb 18 20:01:01 crc kubenswrapper[5007]: I0218 20:01:01.208589 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-xpr64" event={"ID":"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4","Type":"ContainerStarted","Data":"c9e597a9b79d4497ff42a981f8fa7abc0650722d74c32cf25768bd0bff4d7f01"} Feb 18 20:01:01 crc kubenswrapper[5007]: I0218 20:01:01.233298 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524081-xpr64" podStartSLOduration=1.233269068 podStartE2EDuration="1.233269068s" podCreationTimestamp="2026-02-18 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:01:01.224634941 +0000 UTC m=+1346.006754505" watchObservedRunningTime="2026-02-18 20:01:01.233269068 +0000 UTC m=+1346.015388632" Feb 18 20:01:04 crc kubenswrapper[5007]: I0218 20:01:04.248593 5007 generic.go:334] "Generic (PLEG): container finished" podID="cf50f3ee-02bf-4dc6-a717-0a32705fe0d4" containerID="c88af8f9771773b8ae0588d5e60dd5efc3d0bb89e53214c2702bf6f683162ad2" exitCode=0 Feb 18 20:01:04 crc kubenswrapper[5007]: I0218 20:01:04.248729 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-xpr64" 
event={"ID":"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4","Type":"ContainerDied","Data":"c88af8f9771773b8ae0588d5e60dd5efc3d0bb89e53214c2702bf6f683162ad2"} Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.753750 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.876179 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-combined-ca-bundle\") pod \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.876269 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-config-data\") pod \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.876406 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-fernet-keys\") pod \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.876483 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lf5r\" (UniqueName: \"kubernetes.io/projected/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-kube-api-access-6lf5r\") pod \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\" (UID: \"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4\") " Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.885234 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-kube-api-access-6lf5r" 
(OuterVolumeSpecName: "kube-api-access-6lf5r") pod "cf50f3ee-02bf-4dc6-a717-0a32705fe0d4" (UID: "cf50f3ee-02bf-4dc6-a717-0a32705fe0d4"). InnerVolumeSpecName "kube-api-access-6lf5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.886170 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cf50f3ee-02bf-4dc6-a717-0a32705fe0d4" (UID: "cf50f3ee-02bf-4dc6-a717-0a32705fe0d4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.930618 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf50f3ee-02bf-4dc6-a717-0a32705fe0d4" (UID: "cf50f3ee-02bf-4dc6-a717-0a32705fe0d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.962654 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-config-data" (OuterVolumeSpecName: "config-data") pod "cf50f3ee-02bf-4dc6-a717-0a32705fe0d4" (UID: "cf50f3ee-02bf-4dc6-a717-0a32705fe0d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.979122 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.979158 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.979169 5007 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:05 crc kubenswrapper[5007]: I0218 20:01:05.979179 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lf5r\" (UniqueName: \"kubernetes.io/projected/cf50f3ee-02bf-4dc6-a717-0a32705fe0d4-kube-api-access-6lf5r\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.270370 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-xpr64" event={"ID":"cf50f3ee-02bf-4dc6-a717-0a32705fe0d4","Type":"ContainerDied","Data":"c9e597a9b79d4497ff42a981f8fa7abc0650722d74c32cf25768bd0bff4d7f01"} Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.270411 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9e597a9b79d4497ff42a981f8fa7abc0650722d74c32cf25768bd0bff4d7f01" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.270451 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-xpr64" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.270770 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.358602 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr"] Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.358867 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" podUID="de972c5a-b715-4927-a929-4a36ac39ae34" containerName="dnsmasq-dns" containerID="cri-o://bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523" gracePeriod=10 Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.545869 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58df99d78f-r45sp"] Feb 18 20:01:06 crc kubenswrapper[5007]: E0218 20:01:06.546592 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf50f3ee-02bf-4dc6-a717-0a32705fe0d4" containerName="keystone-cron" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.546610 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf50f3ee-02bf-4dc6-a717-0a32705fe0d4" containerName="keystone-cron" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.546804 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf50f3ee-02bf-4dc6-a717-0a32705fe0d4" containerName="keystone-cron" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.547887 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.559677 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58df99d78f-r45sp"] Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.693221 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpwvt\" (UniqueName: \"kubernetes.io/projected/12093548-5c6d-4f46-a566-93a6ff931c68-kube-api-access-cpwvt\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.693286 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-openstack-edpm-ipam\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.693337 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-config\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.693392 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-ovsdbserver-sb\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.693466 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-dns-swift-storage-0\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.693527 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-dns-svc\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.693558 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-ovsdbserver-nb\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.795223 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-openstack-edpm-ipam\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.795314 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-config\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.795378 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-ovsdbserver-sb\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.795421 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-dns-swift-storage-0\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.795499 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-dns-svc\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.795523 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-ovsdbserver-nb\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.795609 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpwvt\" (UniqueName: \"kubernetes.io/projected/12093548-5c6d-4f46-a566-93a6ff931c68-kube-api-access-cpwvt\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.796909 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-ovsdbserver-sb\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.800977 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-dns-svc\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.805176 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-openstack-edpm-ipam\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.805275 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-dns-swift-storage-0\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.805305 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-ovsdbserver-nb\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.805697 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/12093548-5c6d-4f46-a566-93a6ff931c68-config\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.819377 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpwvt\" (UniqueName: \"kubernetes.io/projected/12093548-5c6d-4f46-a566-93a6ff931c68-kube-api-access-cpwvt\") pod \"dnsmasq-dns-58df99d78f-r45sp\" (UID: \"12093548-5c6d-4f46-a566-93a6ff931c68\") " pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.877587 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:06 crc kubenswrapper[5007]: I0218 20:01:06.985545 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.101210 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-ovsdbserver-nb\") pod \"de972c5a-b715-4927-a929-4a36ac39ae34\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.101326 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-dns-swift-storage-0\") pod \"de972c5a-b715-4927-a929-4a36ac39ae34\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.101391 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9gzv\" (UniqueName: \"kubernetes.io/projected/de972c5a-b715-4927-a929-4a36ac39ae34-kube-api-access-r9gzv\") pod 
\"de972c5a-b715-4927-a929-4a36ac39ae34\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.101603 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-dns-svc\") pod \"de972c5a-b715-4927-a929-4a36ac39ae34\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.101639 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-config\") pod \"de972c5a-b715-4927-a929-4a36ac39ae34\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.101663 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-ovsdbserver-sb\") pod \"de972c5a-b715-4927-a929-4a36ac39ae34\" (UID: \"de972c5a-b715-4927-a929-4a36ac39ae34\") " Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.119985 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de972c5a-b715-4927-a929-4a36ac39ae34-kube-api-access-r9gzv" (OuterVolumeSpecName: "kube-api-access-r9gzv") pod "de972c5a-b715-4927-a929-4a36ac39ae34" (UID: "de972c5a-b715-4927-a929-4a36ac39ae34"). InnerVolumeSpecName "kube-api-access-r9gzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.164071 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "de972c5a-b715-4927-a929-4a36ac39ae34" (UID: "de972c5a-b715-4927-a929-4a36ac39ae34"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.191899 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de972c5a-b715-4927-a929-4a36ac39ae34" (UID: "de972c5a-b715-4927-a929-4a36ac39ae34"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.195789 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de972c5a-b715-4927-a929-4a36ac39ae34" (UID: "de972c5a-b715-4927-a929-4a36ac39ae34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.207087 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.207114 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.207124 5007 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.207132 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9gzv\" (UniqueName: \"kubernetes.io/projected/de972c5a-b715-4927-a929-4a36ac39ae34-kube-api-access-r9gzv\") on node \"crc\" DevicePath 
\"\"" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.207513 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-config" (OuterVolumeSpecName: "config") pod "de972c5a-b715-4927-a929-4a36ac39ae34" (UID: "de972c5a-b715-4927-a929-4a36ac39ae34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.233074 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de972c5a-b715-4927-a929-4a36ac39ae34" (UID: "de972c5a-b715-4927-a929-4a36ac39ae34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.282014 5007 generic.go:334] "Generic (PLEG): container finished" podID="de972c5a-b715-4927-a929-4a36ac39ae34" containerID="bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523" exitCode=0 Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.282064 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" event={"ID":"de972c5a-b715-4927-a929-4a36ac39ae34","Type":"ContainerDied","Data":"bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523"} Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.282094 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" event={"ID":"de972c5a-b715-4927-a929-4a36ac39ae34","Type":"ContainerDied","Data":"a1edbad23c83af3d432a4a0ff7c84623bde8f258708a5cc981898393bc5650f6"} Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.282098 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.282115 5007 scope.go:117] "RemoveContainer" containerID="bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.312776 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.312808 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de972c5a-b715-4927-a929-4a36ac39ae34-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.327755 5007 scope.go:117] "RemoveContainer" containerID="879ebbc7b38f29dc82d48b8d5f4b7371744f7576b64aefa3cf9ecca8e384208c" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.330414 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr"] Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.347932 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr"] Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.366809 5007 scope.go:117] "RemoveContainer" containerID="bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523" Feb 18 20:01:07 crc kubenswrapper[5007]: E0218 20:01:07.373551 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523\": container with ID starting with bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523 not found: ID does not exist" containerID="bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.373605 5007 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523"} err="failed to get container status \"bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523\": rpc error: code = NotFound desc = could not find container \"bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523\": container with ID starting with bb86333fdbad5493d432d9b5edec9ef354cb1c461b43590a1c402e80b791e523 not found: ID does not exist" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.373642 5007 scope.go:117] "RemoveContainer" containerID="879ebbc7b38f29dc82d48b8d5f4b7371744f7576b64aefa3cf9ecca8e384208c" Feb 18 20:01:07 crc kubenswrapper[5007]: E0218 20:01:07.377585 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879ebbc7b38f29dc82d48b8d5f4b7371744f7576b64aefa3cf9ecca8e384208c\": container with ID starting with 879ebbc7b38f29dc82d48b8d5f4b7371744f7576b64aefa3cf9ecca8e384208c not found: ID does not exist" containerID="879ebbc7b38f29dc82d48b8d5f4b7371744f7576b64aefa3cf9ecca8e384208c" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.377629 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879ebbc7b38f29dc82d48b8d5f4b7371744f7576b64aefa3cf9ecca8e384208c"} err="failed to get container status \"879ebbc7b38f29dc82d48b8d5f4b7371744f7576b64aefa3cf9ecca8e384208c\": rpc error: code = NotFound desc = could not find container \"879ebbc7b38f29dc82d48b8d5f4b7371744f7576b64aefa3cf9ecca8e384208c\": container with ID starting with 879ebbc7b38f29dc82d48b8d5f4b7371744f7576b64aefa3cf9ecca8e384208c not found: ID does not exist" Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.389607 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58df99d78f-r45sp"] Feb 18 20:01:07 crc kubenswrapper[5007]: W0218 20:01:07.392664 5007 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12093548_5c6d_4f46_a566_93a6ff931c68.slice/crio-b72bbea2caf28aa984b9e7328dc2cc935797a4d093433120cfd540c92b8d456a WatchSource:0}: Error finding container b72bbea2caf28aa984b9e7328dc2cc935797a4d093433120cfd540c92b8d456a: Status 404 returned error can't find the container with id b72bbea2caf28aa984b9e7328dc2cc935797a4d093433120cfd540c92b8d456a Feb 18 20:01:07 crc kubenswrapper[5007]: I0218 20:01:07.923732 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de972c5a-b715-4927-a929-4a36ac39ae34" path="/var/lib/kubelet/pods/de972c5a-b715-4927-a929-4a36ac39ae34/volumes" Feb 18 20:01:08 crc kubenswrapper[5007]: I0218 20:01:08.293409 5007 generic.go:334] "Generic (PLEG): container finished" podID="12093548-5c6d-4f46-a566-93a6ff931c68" containerID="2700675d72314ef8e1f5e6cdf67e08c6f9006eea0348d72946a6d6eed9e8cc5a" exitCode=0 Feb 18 20:01:08 crc kubenswrapper[5007]: I0218 20:01:08.293462 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58df99d78f-r45sp" event={"ID":"12093548-5c6d-4f46-a566-93a6ff931c68","Type":"ContainerDied","Data":"2700675d72314ef8e1f5e6cdf67e08c6f9006eea0348d72946a6d6eed9e8cc5a"} Feb 18 20:01:08 crc kubenswrapper[5007]: I0218 20:01:08.293511 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58df99d78f-r45sp" event={"ID":"12093548-5c6d-4f46-a566-93a6ff931c68","Type":"ContainerStarted","Data":"b72bbea2caf28aa984b9e7328dc2cc935797a4d093433120cfd540c92b8d456a"} Feb 18 20:01:09 crc kubenswrapper[5007]: I0218 20:01:09.307056 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58df99d78f-r45sp" event={"ID":"12093548-5c6d-4f46-a566-93a6ff931c68","Type":"ContainerStarted","Data":"12a9df395711d62b5d16b88c05a4c4a3f959be56dff7d0d793eb27041bba6c74"} Feb 18 20:01:09 crc kubenswrapper[5007]: I0218 20:01:09.307559 5007 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:09 crc kubenswrapper[5007]: I0218 20:01:09.346325 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58df99d78f-r45sp" podStartSLOduration=3.346302784 podStartE2EDuration="3.346302784s" podCreationTimestamp="2026-02-18 20:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:01:09.321409979 +0000 UTC m=+1354.103529583" watchObservedRunningTime="2026-02-18 20:01:09.346302784 +0000 UTC m=+1354.128422318" Feb 18 20:01:11 crc kubenswrapper[5007]: I0218 20:01:11.809966 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7ffb4f4f5c-jtgzr" podUID="de972c5a-b715-4927-a929-4a36ac39ae34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.224:5353: i/o timeout" Feb 18 20:01:15 crc kubenswrapper[5007]: I0218 20:01:15.664331 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:01:15 crc kubenswrapper[5007]: I0218 20:01:15.664913 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:01:16 crc kubenswrapper[5007]: I0218 20:01:16.879653 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58df99d78f-r45sp" Feb 18 20:01:16 crc kubenswrapper[5007]: I0218 20:01:16.960128 5007 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64d7f5f6d7-7tvdr"] Feb 18 20:01:16 crc kubenswrapper[5007]: I0218 20:01:16.960389 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" podUID="a3db05b5-5b48-4118-963d-86e22b4e78bc" containerName="dnsmasq-dns" containerID="cri-o://6ba17ac0e6456e2667da2260eb0157dd5a5553fbb6cb89866ee3115645a49b06" gracePeriod=10 Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.420551 5007 generic.go:334] "Generic (PLEG): container finished" podID="a3db05b5-5b48-4118-963d-86e22b4e78bc" containerID="6ba17ac0e6456e2667da2260eb0157dd5a5553fbb6cb89866ee3115645a49b06" exitCode=0 Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.420597 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" event={"ID":"a3db05b5-5b48-4118-963d-86e22b4e78bc","Type":"ContainerDied","Data":"6ba17ac0e6456e2667da2260eb0157dd5a5553fbb6cb89866ee3115645a49b06"} Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.420910 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" event={"ID":"a3db05b5-5b48-4118-963d-86e22b4e78bc","Type":"ContainerDied","Data":"5f3510abb5caf8772ceda907e84c78daaad22bb7b372a9d97d18f06ec1ac1416"} Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.420933 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f3510abb5caf8772ceda907e84c78daaad22bb7b372a9d97d18f06ec1ac1416" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.464814 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.534135 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-dns-swift-storage-0\") pod \"a3db05b5-5b48-4118-963d-86e22b4e78bc\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.534604 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-dns-svc\") pod \"a3db05b5-5b48-4118-963d-86e22b4e78bc\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.535322 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8h4h\" (UniqueName: \"kubernetes.io/projected/a3db05b5-5b48-4118-963d-86e22b4e78bc-kube-api-access-c8h4h\") pod \"a3db05b5-5b48-4118-963d-86e22b4e78bc\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.535353 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-ovsdbserver-nb\") pod \"a3db05b5-5b48-4118-963d-86e22b4e78bc\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.535883 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-ovsdbserver-sb\") pod \"a3db05b5-5b48-4118-963d-86e22b4e78bc\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.535970 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-config\") pod \"a3db05b5-5b48-4118-963d-86e22b4e78bc\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.536007 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-openstack-edpm-ipam\") pod \"a3db05b5-5b48-4118-963d-86e22b4e78bc\" (UID: \"a3db05b5-5b48-4118-963d-86e22b4e78bc\") " Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.540683 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3db05b5-5b48-4118-963d-86e22b4e78bc-kube-api-access-c8h4h" (OuterVolumeSpecName: "kube-api-access-c8h4h") pod "a3db05b5-5b48-4118-963d-86e22b4e78bc" (UID: "a3db05b5-5b48-4118-963d-86e22b4e78bc"). InnerVolumeSpecName "kube-api-access-c8h4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.584215 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3db05b5-5b48-4118-963d-86e22b4e78bc" (UID: "a3db05b5-5b48-4118-963d-86e22b4e78bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.590845 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3db05b5-5b48-4118-963d-86e22b4e78bc" (UID: "a3db05b5-5b48-4118-963d-86e22b4e78bc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.600284 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a3db05b5-5b48-4118-963d-86e22b4e78bc" (UID: "a3db05b5-5b48-4118-963d-86e22b4e78bc"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.605788 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3db05b5-5b48-4118-963d-86e22b4e78bc" (UID: "a3db05b5-5b48-4118-963d-86e22b4e78bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.614213 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-config" (OuterVolumeSpecName: "config") pod "a3db05b5-5b48-4118-963d-86e22b4e78bc" (UID: "a3db05b5-5b48-4118-963d-86e22b4e78bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.624381 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a3db05b5-5b48-4118-963d-86e22b4e78bc" (UID: "a3db05b5-5b48-4118-963d-86e22b4e78bc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.638291 5007 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.638326 5007 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.638335 5007 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.638344 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8h4h\" (UniqueName: \"kubernetes.io/projected/a3db05b5-5b48-4118-963d-86e22b4e78bc-kube-api-access-c8h4h\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.638355 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.638363 5007 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:17 crc kubenswrapper[5007]: I0218 20:01:17.638371 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3db05b5-5b48-4118-963d-86e22b4e78bc-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:18 crc kubenswrapper[5007]: I0218 20:01:18.429292 
5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64d7f5f6d7-7tvdr" Feb 18 20:01:18 crc kubenswrapper[5007]: I0218 20:01:18.454108 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64d7f5f6d7-7tvdr"] Feb 18 20:01:18 crc kubenswrapper[5007]: I0218 20:01:18.464484 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64d7f5f6d7-7tvdr"] Feb 18 20:01:19 crc kubenswrapper[5007]: I0218 20:01:19.931985 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3db05b5-5b48-4118-963d-86e22b4e78bc" path="/var/lib/kubelet/pods/a3db05b5-5b48-4118-963d-86e22b4e78bc/volumes" Feb 18 20:01:23 crc kubenswrapper[5007]: I0218 20:01:23.489693 5007 generic.go:334] "Generic (PLEG): container finished" podID="7c6a7259-f35d-452f-a26b-79be196180ce" containerID="1d9bed4b2cca2460d70299b0e1ba4507304b1b301e602936e292a2feadd284af" exitCode=0 Feb 18 20:01:23 crc kubenswrapper[5007]: I0218 20:01:23.489810 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c6a7259-f35d-452f-a26b-79be196180ce","Type":"ContainerDied","Data":"1d9bed4b2cca2460d70299b0e1ba4507304b1b301e602936e292a2feadd284af"} Feb 18 20:01:23 crc kubenswrapper[5007]: I0218 20:01:23.492449 5007 generic.go:334] "Generic (PLEG): container finished" podID="f35fa3cf-13db-47c4-b7b0-d67dd97674ba" containerID="c60e68b82575913344513d6bff29ebdb99846ffea71d4393df4f42f39dd2da7b" exitCode=0 Feb 18 20:01:23 crc kubenswrapper[5007]: I0218 20:01:23.492500 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f35fa3cf-13db-47c4-b7b0-d67dd97674ba","Type":"ContainerDied","Data":"c60e68b82575913344513d6bff29ebdb99846ffea71d4393df4f42f39dd2da7b"} Feb 18 20:01:24 crc kubenswrapper[5007]: I0218 20:01:24.506943 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"f35fa3cf-13db-47c4-b7b0-d67dd97674ba","Type":"ContainerStarted","Data":"fe5990e23f05e1bd72653b9314c742e3cb86fd208feac67127f2c912fa19189a"} Feb 18 20:01:24 crc kubenswrapper[5007]: I0218 20:01:24.507635 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:01:24 crc kubenswrapper[5007]: I0218 20:01:24.510461 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c6a7259-f35d-452f-a26b-79be196180ce","Type":"ContainerStarted","Data":"4ad0de4c0b1a5c8c055ef56c52656632165898fca02cc2672ac575ff2d344434"} Feb 18 20:01:24 crc kubenswrapper[5007]: I0218 20:01:24.510725 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 20:01:24 crc kubenswrapper[5007]: I0218 20:01:24.536810 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.536786632 podStartE2EDuration="36.536786632s" podCreationTimestamp="2026-02-18 20:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:01:24.531657845 +0000 UTC m=+1369.313777449" watchObservedRunningTime="2026-02-18 20:01:24.536786632 +0000 UTC m=+1369.318906156" Feb 18 20:01:24 crc kubenswrapper[5007]: I0218 20:01:24.580714 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.580695283 podStartE2EDuration="36.580695283s" podCreationTimestamp="2026-02-18 20:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:01:24.570957494 +0000 UTC m=+1369.353077028" watchObservedRunningTime="2026-02-18 20:01:24.580695283 +0000 UTC m=+1369.362814807" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.918456 5007 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs"] Feb 18 20:01:34 crc kubenswrapper[5007]: E0218 20:01:34.920312 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de972c5a-b715-4927-a929-4a36ac39ae34" containerName="dnsmasq-dns" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.920336 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de972c5a-b715-4927-a929-4a36ac39ae34" containerName="dnsmasq-dns" Feb 18 20:01:34 crc kubenswrapper[5007]: E0218 20:01:34.920350 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3db05b5-5b48-4118-963d-86e22b4e78bc" containerName="init" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.920361 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3db05b5-5b48-4118-963d-86e22b4e78bc" containerName="init" Feb 18 20:01:34 crc kubenswrapper[5007]: E0218 20:01:34.920391 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de972c5a-b715-4927-a929-4a36ac39ae34" containerName="init" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.920400 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="de972c5a-b715-4927-a929-4a36ac39ae34" containerName="init" Feb 18 20:01:34 crc kubenswrapper[5007]: E0218 20:01:34.920476 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3db05b5-5b48-4118-963d-86e22b4e78bc" containerName="dnsmasq-dns" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.920487 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3db05b5-5b48-4118-963d-86e22b4e78bc" containerName="dnsmasq-dns" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.921068 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3db05b5-5b48-4118-963d-86e22b4e78bc" containerName="dnsmasq-dns" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.921129 5007 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="de972c5a-b715-4927-a929-4a36ac39ae34" containerName="dnsmasq-dns" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.922509 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.927539 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.927998 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.929605 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.929763 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:01:34 crc kubenswrapper[5007]: I0218 20:01:34.958753 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs"] Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.037585 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.037665 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs\" 
(UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.037688 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmmmj\" (UniqueName: \"kubernetes.io/projected/0c860548-09b3-46c7-b55b-f6254842d08e-kube-api-access-dmmmj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.037748 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.141198 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.141302 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: 
I0218 20:01:35.141335 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmmmj\" (UniqueName: \"kubernetes.io/projected/0c860548-09b3-46c7-b55b-f6254842d08e-kube-api-access-dmmmj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.141393 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.150393 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.152165 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.153795 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.165235 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmmmj\" (UniqueName: \"kubernetes.io/projected/0c860548-09b3-46c7-b55b-f6254842d08e-kube-api-access-dmmmj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.261153 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:35 crc kubenswrapper[5007]: W0218 20:01:35.961008 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c860548_09b3_46c7_b55b_f6254842d08e.slice/crio-7d2b00701d6e04ca412483b3f3cfd9c8c8c8f31648d8a9c3ca8e4a728df88271 WatchSource:0}: Error finding container 7d2b00701d6e04ca412483b3f3cfd9c8c8c8f31648d8a9c3ca8e4a728df88271: Status 404 returned error can't find the container with id 7d2b00701d6e04ca412483b3f3cfd9c8c8c8f31648d8a9c3ca8e4a728df88271 Feb 18 20:01:35 crc kubenswrapper[5007]: I0218 20:01:35.970176 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs"] Feb 18 20:01:36 crc kubenswrapper[5007]: I0218 20:01:36.629510 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" event={"ID":"0c860548-09b3-46c7-b55b-f6254842d08e","Type":"ContainerStarted","Data":"7d2b00701d6e04ca412483b3f3cfd9c8c8c8f31648d8a9c3ca8e4a728df88271"} Feb 18 20:01:38 crc 
kubenswrapper[5007]: I0218 20:01:38.486537 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 20:01:38 crc kubenswrapper[5007]: I0218 20:01:38.805621 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 20:01:45 crc kubenswrapper[5007]: I0218 20:01:45.663583 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:01:45 crc kubenswrapper[5007]: I0218 20:01:45.664206 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:01:45 crc kubenswrapper[5007]: I0218 20:01:45.664258 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 20:01:45 crc kubenswrapper[5007]: I0218 20:01:45.665248 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2fca2ab9c592f01755a09509912ed2a7ab21cefa3453d87cc11d33a8e4e411bb"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:01:45 crc kubenswrapper[5007]: I0218 20:01:45.665304 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" 
containerName="machine-config-daemon" containerID="cri-o://2fca2ab9c592f01755a09509912ed2a7ab21cefa3453d87cc11d33a8e4e411bb" gracePeriod=600 Feb 18 20:01:46 crc kubenswrapper[5007]: I0218 20:01:46.440087 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:01:46 crc kubenswrapper[5007]: I0218 20:01:46.753267 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="2fca2ab9c592f01755a09509912ed2a7ab21cefa3453d87cc11d33a8e4e411bb" exitCode=0 Feb 18 20:01:46 crc kubenswrapper[5007]: I0218 20:01:46.753327 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"2fca2ab9c592f01755a09509912ed2a7ab21cefa3453d87cc11d33a8e4e411bb"} Feb 18 20:01:46 crc kubenswrapper[5007]: I0218 20:01:46.753385 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90"} Feb 18 20:01:46 crc kubenswrapper[5007]: I0218 20:01:46.753411 5007 scope.go:117] "RemoveContainer" containerID="735004cd6ccf501f03eefba61a70e42cc068351a26665fe459f03c839d84882e" Feb 18 20:01:46 crc kubenswrapper[5007]: I0218 20:01:46.755515 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" event={"ID":"0c860548-09b3-46c7-b55b-f6254842d08e","Type":"ContainerStarted","Data":"f1846f6b4ada60850dbf47c018c635ba7ce80a41200eee76d1dd028a18cec411"} Feb 18 20:01:46 crc kubenswrapper[5007]: I0218 20:01:46.802505 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" podStartSLOduration=2.329374127 
podStartE2EDuration="12.802483457s" podCreationTimestamp="2026-02-18 20:01:34 +0000 UTC" firstStartedPulling="2026-02-18 20:01:35.964257891 +0000 UTC m=+1380.746377415" lastFinishedPulling="2026-02-18 20:01:46.437367221 +0000 UTC m=+1391.219486745" observedRunningTime="2026-02-18 20:01:46.788388113 +0000 UTC m=+1391.570507657" watchObservedRunningTime="2026-02-18 20:01:46.802483457 +0000 UTC m=+1391.584603011" Feb 18 20:01:51 crc kubenswrapper[5007]: I0218 20:01:51.701106 5007 scope.go:117] "RemoveContainer" containerID="a0907fd6b319d397ea4996e29f9b606ce47dcc2e2611116a8afff4bb17779685" Feb 18 20:01:51 crc kubenswrapper[5007]: I0218 20:01:51.763119 5007 scope.go:117] "RemoveContainer" containerID="d13bd835c7a75e0fd047866199f573f5ab6e93a40365b1ee95241dd3f9c41398" Feb 18 20:01:51 crc kubenswrapper[5007]: I0218 20:01:51.790731 5007 scope.go:117] "RemoveContainer" containerID="8c7e5acd990d447d33c0f0612664ab1c135e2e2dbbfe7bef1f1b54619e1fbdbc" Feb 18 20:01:51 crc kubenswrapper[5007]: I0218 20:01:51.862167 5007 scope.go:117] "RemoveContainer" containerID="6de973d2441b7f690e0d0127fbae3dec6dfa065033778c59ead353fd491b33b0" Feb 18 20:01:57 crc kubenswrapper[5007]: I0218 20:01:57.894896 5007 generic.go:334] "Generic (PLEG): container finished" podID="0c860548-09b3-46c7-b55b-f6254842d08e" containerID="f1846f6b4ada60850dbf47c018c635ba7ce80a41200eee76d1dd028a18cec411" exitCode=0 Feb 18 20:01:57 crc kubenswrapper[5007]: I0218 20:01:57.894981 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" event={"ID":"0c860548-09b3-46c7-b55b-f6254842d08e","Type":"ContainerDied","Data":"f1846f6b4ada60850dbf47c018c635ba7ce80a41200eee76d1dd028a18cec411"} Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.501863 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.641146 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-ssh-key-openstack-edpm-ipam\") pod \"0c860548-09b3-46c7-b55b-f6254842d08e\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.641361 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-repo-setup-combined-ca-bundle\") pod \"0c860548-09b3-46c7-b55b-f6254842d08e\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.641542 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmmmj\" (UniqueName: \"kubernetes.io/projected/0c860548-09b3-46c7-b55b-f6254842d08e-kube-api-access-dmmmj\") pod \"0c860548-09b3-46c7-b55b-f6254842d08e\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.641684 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-inventory\") pod \"0c860548-09b3-46c7-b55b-f6254842d08e\" (UID: \"0c860548-09b3-46c7-b55b-f6254842d08e\") " Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.646762 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c860548-09b3-46c7-b55b-f6254842d08e-kube-api-access-dmmmj" (OuterVolumeSpecName: "kube-api-access-dmmmj") pod "0c860548-09b3-46c7-b55b-f6254842d08e" (UID: "0c860548-09b3-46c7-b55b-f6254842d08e"). InnerVolumeSpecName "kube-api-access-dmmmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.651216 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0c860548-09b3-46c7-b55b-f6254842d08e" (UID: "0c860548-09b3-46c7-b55b-f6254842d08e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.668962 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-inventory" (OuterVolumeSpecName: "inventory") pod "0c860548-09b3-46c7-b55b-f6254842d08e" (UID: "0c860548-09b3-46c7-b55b-f6254842d08e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.676127 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0c860548-09b3-46c7-b55b-f6254842d08e" (UID: "0c860548-09b3-46c7-b55b-f6254842d08e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.744374 5007 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.744410 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmmmj\" (UniqueName: \"kubernetes.io/projected/0c860548-09b3-46c7-b55b-f6254842d08e-kube-api-access-dmmmj\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.744420 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.744444 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c860548-09b3-46c7-b55b-f6254842d08e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.927516 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.928244 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-c7chs" event={"ID":"0c860548-09b3-46c7-b55b-f6254842d08e","Type":"ContainerDied","Data":"7d2b00701d6e04ca412483b3f3cfd9c8c8c8f31648d8a9c3ca8e4a728df88271"} Feb 18 20:01:59 crc kubenswrapper[5007]: I0218 20:01:59.928325 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d2b00701d6e04ca412483b3f3cfd9c8c8c8f31648d8a9c3ca8e4a728df88271" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.023286 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr"] Feb 18 20:02:00 crc kubenswrapper[5007]: E0218 20:02:00.023783 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c860548-09b3-46c7-b55b-f6254842d08e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.023804 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c860548-09b3-46c7-b55b-f6254842d08e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.024043 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c860548-09b3-46c7-b55b-f6254842d08e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.024791 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.027378 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.027730 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.027888 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.028019 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.035954 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr"] Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.165823 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9ba747-4caf-430c-867e-09aa841ce2c6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5rrxr\" (UID: \"dd9ba747-4caf-430c-867e-09aa841ce2c6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.165913 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qcwf\" (UniqueName: \"kubernetes.io/projected/dd9ba747-4caf-430c-867e-09aa841ce2c6-kube-api-access-5qcwf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5rrxr\" (UID: \"dd9ba747-4caf-430c-867e-09aa841ce2c6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.166204 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9ba747-4caf-430c-867e-09aa841ce2c6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5rrxr\" (UID: \"dd9ba747-4caf-430c-867e-09aa841ce2c6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.268359 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9ba747-4caf-430c-867e-09aa841ce2c6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5rrxr\" (UID: \"dd9ba747-4caf-430c-867e-09aa841ce2c6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.268453 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9ba747-4caf-430c-867e-09aa841ce2c6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5rrxr\" (UID: \"dd9ba747-4caf-430c-867e-09aa841ce2c6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.268476 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qcwf\" (UniqueName: \"kubernetes.io/projected/dd9ba747-4caf-430c-867e-09aa841ce2c6-kube-api-access-5qcwf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5rrxr\" (UID: \"dd9ba747-4caf-430c-867e-09aa841ce2c6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.272937 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9ba747-4caf-430c-867e-09aa841ce2c6-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-5rrxr\" (UID: \"dd9ba747-4caf-430c-867e-09aa841ce2c6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.277742 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9ba747-4caf-430c-867e-09aa841ce2c6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5rrxr\" (UID: \"dd9ba747-4caf-430c-867e-09aa841ce2c6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.285963 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qcwf\" (UniqueName: \"kubernetes.io/projected/dd9ba747-4caf-430c-867e-09aa841ce2c6-kube-api-access-5qcwf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5rrxr\" (UID: \"dd9ba747-4caf-430c-867e-09aa841ce2c6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:00 crc kubenswrapper[5007]: I0218 20:02:00.381051 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:01 crc kubenswrapper[5007]: W0218 20:02:01.025811 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd9ba747_4caf_430c_867e_09aa841ce2c6.slice/crio-07c9ca07b73ce753ea46fa1a2bcd7d2b56889671b09e16bc9ab6f69234c36186 WatchSource:0}: Error finding container 07c9ca07b73ce753ea46fa1a2bcd7d2b56889671b09e16bc9ab6f69234c36186: Status 404 returned error can't find the container with id 07c9ca07b73ce753ea46fa1a2bcd7d2b56889671b09e16bc9ab6f69234c36186 Feb 18 20:02:01 crc kubenswrapper[5007]: I0218 20:02:01.030696 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr"] Feb 18 20:02:01 crc kubenswrapper[5007]: I0218 20:02:01.958906 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" event={"ID":"dd9ba747-4caf-430c-867e-09aa841ce2c6","Type":"ContainerStarted","Data":"5df03050bc29c4b8db879975c0c60e1fbb6d7912a3ba0a1fb17dd458d8c2adfc"} Feb 18 20:02:01 crc kubenswrapper[5007]: I0218 20:02:01.959292 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" event={"ID":"dd9ba747-4caf-430c-867e-09aa841ce2c6","Type":"ContainerStarted","Data":"07c9ca07b73ce753ea46fa1a2bcd7d2b56889671b09e16bc9ab6f69234c36186"} Feb 18 20:02:01 crc kubenswrapper[5007]: I0218 20:02:01.996194 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" podStartSLOduration=2.494579161 podStartE2EDuration="2.996177278s" podCreationTimestamp="2026-02-18 20:01:59 +0000 UTC" firstStartedPulling="2026-02-18 20:02:01.029514864 +0000 UTC m=+1405.811634408" lastFinishedPulling="2026-02-18 20:02:01.531112961 +0000 UTC m=+1406.313232525" observedRunningTime="2026-02-18 
20:02:01.990700811 +0000 UTC m=+1406.772820355" watchObservedRunningTime="2026-02-18 20:02:01.996177278 +0000 UTC m=+1406.778296802" Feb 18 20:02:04 crc kubenswrapper[5007]: I0218 20:02:04.992966 5007 generic.go:334] "Generic (PLEG): container finished" podID="dd9ba747-4caf-430c-867e-09aa841ce2c6" containerID="5df03050bc29c4b8db879975c0c60e1fbb6d7912a3ba0a1fb17dd458d8c2adfc" exitCode=0 Feb 18 20:02:04 crc kubenswrapper[5007]: I0218 20:02:04.993049 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" event={"ID":"dd9ba747-4caf-430c-867e-09aa841ce2c6","Type":"ContainerDied","Data":"5df03050bc29c4b8db879975c0c60e1fbb6d7912a3ba0a1fb17dd458d8c2adfc"} Feb 18 20:02:06 crc kubenswrapper[5007]: I0218 20:02:06.475872 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:06 crc kubenswrapper[5007]: I0218 20:02:06.509280 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qcwf\" (UniqueName: \"kubernetes.io/projected/dd9ba747-4caf-430c-867e-09aa841ce2c6-kube-api-access-5qcwf\") pod \"dd9ba747-4caf-430c-867e-09aa841ce2c6\" (UID: \"dd9ba747-4caf-430c-867e-09aa841ce2c6\") " Feb 18 20:02:06 crc kubenswrapper[5007]: I0218 20:02:06.509550 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9ba747-4caf-430c-867e-09aa841ce2c6-inventory\") pod \"dd9ba747-4caf-430c-867e-09aa841ce2c6\" (UID: \"dd9ba747-4caf-430c-867e-09aa841ce2c6\") " Feb 18 20:02:06 crc kubenswrapper[5007]: I0218 20:02:06.509686 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9ba747-4caf-430c-867e-09aa841ce2c6-ssh-key-openstack-edpm-ipam\") pod \"dd9ba747-4caf-430c-867e-09aa841ce2c6\" (UID: 
\"dd9ba747-4caf-430c-867e-09aa841ce2c6\") " Feb 18 20:02:06 crc kubenswrapper[5007]: I0218 20:02:06.518708 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9ba747-4caf-430c-867e-09aa841ce2c6-kube-api-access-5qcwf" (OuterVolumeSpecName: "kube-api-access-5qcwf") pod "dd9ba747-4caf-430c-867e-09aa841ce2c6" (UID: "dd9ba747-4caf-430c-867e-09aa841ce2c6"). InnerVolumeSpecName "kube-api-access-5qcwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:02:06 crc kubenswrapper[5007]: I0218 20:02:06.547674 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ba747-4caf-430c-867e-09aa841ce2c6-inventory" (OuterVolumeSpecName: "inventory") pod "dd9ba747-4caf-430c-867e-09aa841ce2c6" (UID: "dd9ba747-4caf-430c-867e-09aa841ce2c6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:06 crc kubenswrapper[5007]: I0218 20:02:06.558357 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9ba747-4caf-430c-867e-09aa841ce2c6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd9ba747-4caf-430c-867e-09aa841ce2c6" (UID: "dd9ba747-4caf-430c-867e-09aa841ce2c6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:06 crc kubenswrapper[5007]: I0218 20:02:06.612643 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9ba747-4caf-430c-867e-09aa841ce2c6-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:06 crc kubenswrapper[5007]: I0218 20:02:06.612697 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9ba747-4caf-430c-867e-09aa841ce2c6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:06 crc kubenswrapper[5007]: I0218 20:02:06.612722 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qcwf\" (UniqueName: \"kubernetes.io/projected/dd9ba747-4caf-430c-867e-09aa841ce2c6-kube-api-access-5qcwf\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.054327 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" event={"ID":"dd9ba747-4caf-430c-867e-09aa841ce2c6","Type":"ContainerDied","Data":"07c9ca07b73ce753ea46fa1a2bcd7d2b56889671b09e16bc9ab6f69234c36186"} Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.054395 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c9ca07b73ce753ea46fa1a2bcd7d2b56889671b09e16bc9ab6f69234c36186" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.054517 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5rrxr" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.118112 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4"] Feb 18 20:02:07 crc kubenswrapper[5007]: E0218 20:02:07.118709 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9ba747-4caf-430c-867e-09aa841ce2c6" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.118736 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9ba747-4caf-430c-867e-09aa841ce2c6" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.118971 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9ba747-4caf-430c-867e-09aa841ce2c6" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.120292 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.122872 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.124020 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.124220 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.124356 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.135587 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4"] Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.234364 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.234458 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.234625 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvj7v\" (UniqueName: \"kubernetes.io/projected/29567b30-6e4c-4ee4-9f94-2f4d58123561-kube-api-access-tvj7v\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.234692 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.336664 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvj7v\" (UniqueName: \"kubernetes.io/projected/29567b30-6e4c-4ee4-9f94-2f4d58123561-kube-api-access-tvj7v\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.336752 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.336909 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.337213 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.343540 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.345110 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.346369 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.369712 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvj7v\" (UniqueName: \"kubernetes.io/projected/29567b30-6e4c-4ee4-9f94-2f4d58123561-kube-api-access-tvj7v\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:07 crc kubenswrapper[5007]: I0218 20:02:07.451267 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:02:08 crc kubenswrapper[5007]: W0218 20:02:08.069981 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29567b30_6e4c_4ee4_9f94_2f4d58123561.slice/crio-7a32c1c7958428db3eb2d348f345c45008b06d11bff2e83783308e6f47c7fdb5 WatchSource:0}: Error finding container 7a32c1c7958428db3eb2d348f345c45008b06d11bff2e83783308e6f47c7fdb5: Status 404 returned error can't find the container with id 7a32c1c7958428db3eb2d348f345c45008b06d11bff2e83783308e6f47c7fdb5 Feb 18 20:02:08 crc kubenswrapper[5007]: I0218 20:02:08.080141 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4"] Feb 18 20:02:09 crc kubenswrapper[5007]: I0218 20:02:09.083738 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" event={"ID":"29567b30-6e4c-4ee4-9f94-2f4d58123561","Type":"ContainerStarted","Data":"dc1d8b5aa9cf602c9f5f3d721ed2bdf2c0a38448e98e53f8d5a13d34d3bbd74d"} Feb 18 20:02:09 crc kubenswrapper[5007]: I0218 20:02:09.084172 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" 
event={"ID":"29567b30-6e4c-4ee4-9f94-2f4d58123561","Type":"ContainerStarted","Data":"7a32c1c7958428db3eb2d348f345c45008b06d11bff2e83783308e6f47c7fdb5"} Feb 18 20:02:09 crc kubenswrapper[5007]: I0218 20:02:09.128540 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" podStartSLOduration=1.699211697 podStartE2EDuration="2.128513297s" podCreationTimestamp="2026-02-18 20:02:07 +0000 UTC" firstStartedPulling="2026-02-18 20:02:08.071878589 +0000 UTC m=+1412.853998143" lastFinishedPulling="2026-02-18 20:02:08.501180209 +0000 UTC m=+1413.283299743" observedRunningTime="2026-02-18 20:02:09.103239811 +0000 UTC m=+1413.885359365" watchObservedRunningTime="2026-02-18 20:02:09.128513297 +0000 UTC m=+1413.910632851" Feb 18 20:02:52 crc kubenswrapper[5007]: I0218 20:02:52.033099 5007 scope.go:117] "RemoveContainer" containerID="d089007680c6ecad73ea0412e763e1d4a6b3943d4af97d825102aac7e899c8a9" Feb 18 20:02:52 crc kubenswrapper[5007]: I0218 20:02:52.054933 5007 scope.go:117] "RemoveContainer" containerID="c8e5d89dd44e0b85f2e9f33e90d578d1171e1c5b57dbae8b4eb6f1eda88cf1cf" Feb 18 20:02:52 crc kubenswrapper[5007]: I0218 20:02:52.104145 5007 scope.go:117] "RemoveContainer" containerID="0e51a2389cfcb3682abb26b5e33e0a9c720b90682c95d69cc66287d3a0c5857a" Feb 18 20:03:52 crc kubenswrapper[5007]: I0218 20:03:52.235733 5007 scope.go:117] "RemoveContainer" containerID="9c7e4237a51292f61a672df7abb7d7cfbf610914c90c30ab352313745a46ba00" Feb 18 20:03:52 crc kubenswrapper[5007]: I0218 20:03:52.268440 5007 scope.go:117] "RemoveContainer" containerID="a68823f2a6faacffdbdb7a75a4632b3b886cf6c2ecd8136a7b40e4ac34d70398" Feb 18 20:04:15 crc kubenswrapper[5007]: I0218 20:04:15.664286 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:04:15 crc kubenswrapper[5007]: I0218 20:04:15.664813 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:04:18 crc kubenswrapper[5007]: I0218 20:04:18.945292 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xjd57"] Feb 18 20:04:18 crc kubenswrapper[5007]: I0218 20:04:18.948138 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:18 crc kubenswrapper[5007]: I0218 20:04:18.977600 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xjd57"] Feb 18 20:04:19 crc kubenswrapper[5007]: I0218 20:04:19.055545 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6150d882-eff1-456d-9925-5d8970d93cf9-catalog-content\") pod \"redhat-operators-xjd57\" (UID: \"6150d882-eff1-456d-9925-5d8970d93cf9\") " pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:19 crc kubenswrapper[5007]: I0218 20:04:19.055649 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr4j5\" (UniqueName: \"kubernetes.io/projected/6150d882-eff1-456d-9925-5d8970d93cf9-kube-api-access-vr4j5\") pod \"redhat-operators-xjd57\" (UID: \"6150d882-eff1-456d-9925-5d8970d93cf9\") " pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:19 crc kubenswrapper[5007]: I0218 20:04:19.055703 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/6150d882-eff1-456d-9925-5d8970d93cf9-utilities\") pod \"redhat-operators-xjd57\" (UID: \"6150d882-eff1-456d-9925-5d8970d93cf9\") " pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:19 crc kubenswrapper[5007]: I0218 20:04:19.157418 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr4j5\" (UniqueName: \"kubernetes.io/projected/6150d882-eff1-456d-9925-5d8970d93cf9-kube-api-access-vr4j5\") pod \"redhat-operators-xjd57\" (UID: \"6150d882-eff1-456d-9925-5d8970d93cf9\") " pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:19 crc kubenswrapper[5007]: I0218 20:04:19.157495 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6150d882-eff1-456d-9925-5d8970d93cf9-utilities\") pod \"redhat-operators-xjd57\" (UID: \"6150d882-eff1-456d-9925-5d8970d93cf9\") " pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:19 crc kubenswrapper[5007]: I0218 20:04:19.157598 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6150d882-eff1-456d-9925-5d8970d93cf9-catalog-content\") pod \"redhat-operators-xjd57\" (UID: \"6150d882-eff1-456d-9925-5d8970d93cf9\") " pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:19 crc kubenswrapper[5007]: I0218 20:04:19.157983 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6150d882-eff1-456d-9925-5d8970d93cf9-catalog-content\") pod \"redhat-operators-xjd57\" (UID: \"6150d882-eff1-456d-9925-5d8970d93cf9\") " pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:19 crc kubenswrapper[5007]: I0218 20:04:19.158406 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6150d882-eff1-456d-9925-5d8970d93cf9-utilities\") pod \"redhat-operators-xjd57\" (UID: \"6150d882-eff1-456d-9925-5d8970d93cf9\") " pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:19 crc kubenswrapper[5007]: I0218 20:04:19.177563 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr4j5\" (UniqueName: \"kubernetes.io/projected/6150d882-eff1-456d-9925-5d8970d93cf9-kube-api-access-vr4j5\") pod \"redhat-operators-xjd57\" (UID: \"6150d882-eff1-456d-9925-5d8970d93cf9\") " pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:19 crc kubenswrapper[5007]: I0218 20:04:19.274391 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:19 crc kubenswrapper[5007]: I0218 20:04:19.754098 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xjd57"] Feb 18 20:04:20 crc kubenswrapper[5007]: I0218 20:04:20.697738 5007 generic.go:334] "Generic (PLEG): container finished" podID="6150d882-eff1-456d-9925-5d8970d93cf9" containerID="e22829db68587d1596368f6bc3d2910f766f4505dc09a82c7b2ff98cbfe82e25" exitCode=0 Feb 18 20:04:20 crc kubenswrapper[5007]: I0218 20:04:20.697846 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjd57" event={"ID":"6150d882-eff1-456d-9925-5d8970d93cf9","Type":"ContainerDied","Data":"e22829db68587d1596368f6bc3d2910f766f4505dc09a82c7b2ff98cbfe82e25"} Feb 18 20:04:20 crc kubenswrapper[5007]: I0218 20:04:20.698059 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjd57" event={"ID":"6150d882-eff1-456d-9925-5d8970d93cf9","Type":"ContainerStarted","Data":"d1e17d6576397efc22e88031fd3b72b0dd439c623b4d7767653f862e3704c5d5"} Feb 18 20:04:20 crc kubenswrapper[5007]: I0218 20:04:20.699514 5007 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 18 20:04:22 crc kubenswrapper[5007]: I0218 20:04:22.719772 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjd57" event={"ID":"6150d882-eff1-456d-9925-5d8970d93cf9","Type":"ContainerStarted","Data":"4b56816ac2e2cb241d22f800f46d8cca22ee4ceae7bf19da5cbe40b40e16b572"} Feb 18 20:04:24 crc kubenswrapper[5007]: I0218 20:04:24.743236 5007 generic.go:334] "Generic (PLEG): container finished" podID="6150d882-eff1-456d-9925-5d8970d93cf9" containerID="4b56816ac2e2cb241d22f800f46d8cca22ee4ceae7bf19da5cbe40b40e16b572" exitCode=0 Feb 18 20:04:24 crc kubenswrapper[5007]: I0218 20:04:24.743371 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjd57" event={"ID":"6150d882-eff1-456d-9925-5d8970d93cf9","Type":"ContainerDied","Data":"4b56816ac2e2cb241d22f800f46d8cca22ee4ceae7bf19da5cbe40b40e16b572"} Feb 18 20:04:25 crc kubenswrapper[5007]: I0218 20:04:25.756631 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjd57" event={"ID":"6150d882-eff1-456d-9925-5d8970d93cf9","Type":"ContainerStarted","Data":"a69a4fbe4846f55c06ac9d8783f7228c7b88368f7a41b48abc75390ea8cdcd36"} Feb 18 20:04:25 crc kubenswrapper[5007]: I0218 20:04:25.779738 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xjd57" podStartSLOduration=3.186303784 podStartE2EDuration="7.779685487s" podCreationTimestamp="2026-02-18 20:04:18 +0000 UTC" firstStartedPulling="2026-02-18 20:04:20.699213262 +0000 UTC m=+1545.481332786" lastFinishedPulling="2026-02-18 20:04:25.292594965 +0000 UTC m=+1550.074714489" observedRunningTime="2026-02-18 20:04:25.774527831 +0000 UTC m=+1550.556647385" watchObservedRunningTime="2026-02-18 20:04:25.779685487 +0000 UTC m=+1550.561805031" Feb 18 20:04:29 crc kubenswrapper[5007]: I0218 20:04:29.275297 5007 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:29 crc kubenswrapper[5007]: I0218 20:04:29.275861 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:30 crc kubenswrapper[5007]: I0218 20:04:30.325574 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xjd57" podUID="6150d882-eff1-456d-9925-5d8970d93cf9" containerName="registry-server" probeResult="failure" output=< Feb 18 20:04:30 crc kubenswrapper[5007]: timeout: failed to connect service ":50051" within 1s Feb 18 20:04:30 crc kubenswrapper[5007]: > Feb 18 20:04:36 crc kubenswrapper[5007]: I0218 20:04:36.852080 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q8nxh"] Feb 18 20:04:36 crc kubenswrapper[5007]: I0218 20:04:36.855947 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:36 crc kubenswrapper[5007]: I0218 20:04:36.877119 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8nxh"] Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.050511 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdk6x\" (UniqueName: \"kubernetes.io/projected/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-kube-api-access-sdk6x\") pod \"redhat-marketplace-q8nxh\" (UID: \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\") " pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.050732 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-utilities\") pod \"redhat-marketplace-q8nxh\" (UID: \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\") " 
pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.050988 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-catalog-content\") pod \"redhat-marketplace-q8nxh\" (UID: \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\") " pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.153039 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-catalog-content\") pod \"redhat-marketplace-q8nxh\" (UID: \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\") " pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.153478 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdk6x\" (UniqueName: \"kubernetes.io/projected/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-kube-api-access-sdk6x\") pod \"redhat-marketplace-q8nxh\" (UID: \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\") " pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.153513 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-catalog-content\") pod \"redhat-marketplace-q8nxh\" (UID: \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\") " pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.153697 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-utilities\") pod \"redhat-marketplace-q8nxh\" (UID: \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\") " 
pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.154194 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-utilities\") pod \"redhat-marketplace-q8nxh\" (UID: \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\") " pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.176299 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdk6x\" (UniqueName: \"kubernetes.io/projected/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-kube-api-access-sdk6x\") pod \"redhat-marketplace-q8nxh\" (UID: \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\") " pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.182140 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.668304 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8nxh"] Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.881980 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8nxh" event={"ID":"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625","Type":"ContainerStarted","Data":"c546cf05e0d4017b578a8442bb7e3fd0b9de752902692423a67f262121cdc3bd"} Feb 18 20:04:37 crc kubenswrapper[5007]: I0218 20:04:37.882024 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8nxh" event={"ID":"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625","Type":"ContainerStarted","Data":"631552484a989500fecc6ee5cd5826cefb9d96c812fec2fe342909a1e3943fca"} Feb 18 20:04:38 crc kubenswrapper[5007]: I0218 20:04:38.901430 5007 generic.go:334] "Generic (PLEG): container finished" 
podID="396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" containerID="c546cf05e0d4017b578a8442bb7e3fd0b9de752902692423a67f262121cdc3bd" exitCode=0 Feb 18 20:04:38 crc kubenswrapper[5007]: I0218 20:04:38.901937 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8nxh" event={"ID":"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625","Type":"ContainerDied","Data":"c546cf05e0d4017b578a8442bb7e3fd0b9de752902692423a67f262121cdc3bd"} Feb 18 20:04:39 crc kubenswrapper[5007]: I0218 20:04:39.329409 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:39 crc kubenswrapper[5007]: I0218 20:04:39.389396 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:39 crc kubenswrapper[5007]: I0218 20:04:39.925174 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8nxh" event={"ID":"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625","Type":"ContainerStarted","Data":"cb5c7f3e98c4e221ca313ac45a09082002d0b1376905099877d918eb2a75d278"} Feb 18 20:04:40 crc kubenswrapper[5007]: I0218 20:04:40.927290 5007 generic.go:334] "Generic (PLEG): container finished" podID="396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" containerID="cb5c7f3e98c4e221ca313ac45a09082002d0b1376905099877d918eb2a75d278" exitCode=0 Feb 18 20:04:40 crc kubenswrapper[5007]: I0218 20:04:40.927358 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8nxh" event={"ID":"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625","Type":"ContainerDied","Data":"cb5c7f3e98c4e221ca313ac45a09082002d0b1376905099877d918eb2a75d278"} Feb 18 20:04:41 crc kubenswrapper[5007]: I0218 20:04:41.614497 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xjd57"] Feb 18 20:04:41 crc kubenswrapper[5007]: I0218 20:04:41.615021 5007 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xjd57" podUID="6150d882-eff1-456d-9925-5d8970d93cf9" containerName="registry-server" containerID="cri-o://a69a4fbe4846f55c06ac9d8783f7228c7b88368f7a41b48abc75390ea8cdcd36" gracePeriod=2 Feb 18 20:04:41 crc kubenswrapper[5007]: I0218 20:04:41.938650 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8nxh" event={"ID":"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625","Type":"ContainerStarted","Data":"a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6"} Feb 18 20:04:41 crc kubenswrapper[5007]: I0218 20:04:41.942522 5007 generic.go:334] "Generic (PLEG): container finished" podID="6150d882-eff1-456d-9925-5d8970d93cf9" containerID="a69a4fbe4846f55c06ac9d8783f7228c7b88368f7a41b48abc75390ea8cdcd36" exitCode=0 Feb 18 20:04:41 crc kubenswrapper[5007]: I0218 20:04:41.942569 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjd57" event={"ID":"6150d882-eff1-456d-9925-5d8970d93cf9","Type":"ContainerDied","Data":"a69a4fbe4846f55c06ac9d8783f7228c7b88368f7a41b48abc75390ea8cdcd36"} Feb 18 20:04:41 crc kubenswrapper[5007]: I0218 20:04:41.959536 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q8nxh" podStartSLOduration=3.531744557 podStartE2EDuration="5.959518995s" podCreationTimestamp="2026-02-18 20:04:36 +0000 UTC" firstStartedPulling="2026-02-18 20:04:38.905030382 +0000 UTC m=+1563.687149946" lastFinishedPulling="2026-02-18 20:04:41.33280486 +0000 UTC m=+1566.114924384" observedRunningTime="2026-02-18 20:04:41.958052854 +0000 UTC m=+1566.740172398" watchObservedRunningTime="2026-02-18 20:04:41.959518995 +0000 UTC m=+1566.741638519" Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.136329 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.185542 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6150d882-eff1-456d-9925-5d8970d93cf9-utilities\") pod \"6150d882-eff1-456d-9925-5d8970d93cf9\" (UID: \"6150d882-eff1-456d-9925-5d8970d93cf9\") " Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.185652 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6150d882-eff1-456d-9925-5d8970d93cf9-catalog-content\") pod \"6150d882-eff1-456d-9925-5d8970d93cf9\" (UID: \"6150d882-eff1-456d-9925-5d8970d93cf9\") " Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.185814 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr4j5\" (UniqueName: \"kubernetes.io/projected/6150d882-eff1-456d-9925-5d8970d93cf9-kube-api-access-vr4j5\") pod \"6150d882-eff1-456d-9925-5d8970d93cf9\" (UID: \"6150d882-eff1-456d-9925-5d8970d93cf9\") " Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.191008 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6150d882-eff1-456d-9925-5d8970d93cf9-utilities" (OuterVolumeSpecName: "utilities") pod "6150d882-eff1-456d-9925-5d8970d93cf9" (UID: "6150d882-eff1-456d-9925-5d8970d93cf9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.198995 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6150d882-eff1-456d-9925-5d8970d93cf9-kube-api-access-vr4j5" (OuterVolumeSpecName: "kube-api-access-vr4j5") pod "6150d882-eff1-456d-9925-5d8970d93cf9" (UID: "6150d882-eff1-456d-9925-5d8970d93cf9"). InnerVolumeSpecName "kube-api-access-vr4j5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.288973 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr4j5\" (UniqueName: \"kubernetes.io/projected/6150d882-eff1-456d-9925-5d8970d93cf9-kube-api-access-vr4j5\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.289008 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6150d882-eff1-456d-9925-5d8970d93cf9-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.322073 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6150d882-eff1-456d-9925-5d8970d93cf9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6150d882-eff1-456d-9925-5d8970d93cf9" (UID: "6150d882-eff1-456d-9925-5d8970d93cf9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.390771 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6150d882-eff1-456d-9925-5d8970d93cf9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.953359 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xjd57" Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.955629 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjd57" event={"ID":"6150d882-eff1-456d-9925-5d8970d93cf9","Type":"ContainerDied","Data":"d1e17d6576397efc22e88031fd3b72b0dd439c623b4d7767653f862e3704c5d5"} Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.955700 5007 scope.go:117] "RemoveContainer" containerID="a69a4fbe4846f55c06ac9d8783f7228c7b88368f7a41b48abc75390ea8cdcd36" Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.980939 5007 scope.go:117] "RemoveContainer" containerID="4b56816ac2e2cb241d22f800f46d8cca22ee4ceae7bf19da5cbe40b40e16b572" Feb 18 20:04:42 crc kubenswrapper[5007]: I0218 20:04:42.999573 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xjd57"] Feb 18 20:04:43 crc kubenswrapper[5007]: I0218 20:04:43.008425 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xjd57"] Feb 18 20:04:43 crc kubenswrapper[5007]: I0218 20:04:43.039227 5007 scope.go:117] "RemoveContainer" containerID="e22829db68587d1596368f6bc3d2910f766f4505dc09a82c7b2ff98cbfe82e25" Feb 18 20:04:43 crc kubenswrapper[5007]: I0218 20:04:43.926037 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6150d882-eff1-456d-9925-5d8970d93cf9" path="/var/lib/kubelet/pods/6150d882-eff1-456d-9925-5d8970d93cf9/volumes" Feb 18 20:04:45 crc kubenswrapper[5007]: I0218 20:04:45.663501 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:04:45 crc kubenswrapper[5007]: I0218 20:04:45.663788 5007 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:04:47 crc kubenswrapper[5007]: I0218 20:04:47.182784 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:47 crc kubenswrapper[5007]: I0218 20:04:47.184201 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:47 crc kubenswrapper[5007]: I0218 20:04:47.273728 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:48 crc kubenswrapper[5007]: I0218 20:04:48.089855 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:48 crc kubenswrapper[5007]: I0218 20:04:48.150599 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8nxh"] Feb 18 20:04:50 crc kubenswrapper[5007]: I0218 20:04:50.027027 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q8nxh" podUID="396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" containerName="registry-server" containerID="cri-o://a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6" gracePeriod=2 Feb 18 20:04:50 crc kubenswrapper[5007]: I0218 20:04:50.508705 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:50 crc kubenswrapper[5007]: I0218 20:04:50.658099 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdk6x\" (UniqueName: \"kubernetes.io/projected/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-kube-api-access-sdk6x\") pod \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\" (UID: \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\") " Feb 18 20:04:50 crc kubenswrapper[5007]: I0218 20:04:50.658175 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-utilities\") pod \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\" (UID: \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\") " Feb 18 20:04:50 crc kubenswrapper[5007]: I0218 20:04:50.658301 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-catalog-content\") pod \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\" (UID: \"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625\") " Feb 18 20:04:50 crc kubenswrapper[5007]: I0218 20:04:50.661016 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-utilities" (OuterVolumeSpecName: "utilities") pod "396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" (UID: "396bfe2a-80f9-4d7c-8534-a7b0cd8a8625"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:04:50 crc kubenswrapper[5007]: I0218 20:04:50.669509 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-kube-api-access-sdk6x" (OuterVolumeSpecName: "kube-api-access-sdk6x") pod "396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" (UID: "396bfe2a-80f9-4d7c-8534-a7b0cd8a8625"). InnerVolumeSpecName "kube-api-access-sdk6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:04:50 crc kubenswrapper[5007]: I0218 20:04:50.685727 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" (UID: "396bfe2a-80f9-4d7c-8534-a7b0cd8a8625"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:04:50 crc kubenswrapper[5007]: I0218 20:04:50.761326 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:50 crc kubenswrapper[5007]: I0218 20:04:50.761361 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:50 crc kubenswrapper[5007]: I0218 20:04:50.761372 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdk6x\" (UniqueName: \"kubernetes.io/projected/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625-kube-api-access-sdk6x\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.048513 5007 generic.go:334] "Generic (PLEG): container finished" podID="396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" containerID="a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6" exitCode=0 Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.048586 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8nxh" event={"ID":"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625","Type":"ContainerDied","Data":"a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6"} Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.048646 5007 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-q8nxh" event={"ID":"396bfe2a-80f9-4d7c-8534-a7b0cd8a8625","Type":"ContainerDied","Data":"631552484a989500fecc6ee5cd5826cefb9d96c812fec2fe342909a1e3943fca"} Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.048663 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8nxh" Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.048684 5007 scope.go:117] "RemoveContainer" containerID="a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6" Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.087002 5007 scope.go:117] "RemoveContainer" containerID="cb5c7f3e98c4e221ca313ac45a09082002d0b1376905099877d918eb2a75d278" Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.136779 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8nxh"] Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.141889 5007 scope.go:117] "RemoveContainer" containerID="c546cf05e0d4017b578a8442bb7e3fd0b9de752902692423a67f262121cdc3bd" Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.148254 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8nxh"] Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.170507 5007 scope.go:117] "RemoveContainer" containerID="a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6" Feb 18 20:04:51 crc kubenswrapper[5007]: E0218 20:04:51.171470 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6\": container with ID starting with a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6 not found: ID does not exist" containerID="a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6" Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.171529 5007 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6"} err="failed to get container status \"a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6\": rpc error: code = NotFound desc = could not find container \"a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6\": container with ID starting with a0d15335962d16f7e49edb5bb1cd26d24d2fe5545d17800b3ecc12ca5653e9d6 not found: ID does not exist" Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.171566 5007 scope.go:117] "RemoveContainer" containerID="cb5c7f3e98c4e221ca313ac45a09082002d0b1376905099877d918eb2a75d278" Feb 18 20:04:51 crc kubenswrapper[5007]: E0218 20:04:51.171960 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb5c7f3e98c4e221ca313ac45a09082002d0b1376905099877d918eb2a75d278\": container with ID starting with cb5c7f3e98c4e221ca313ac45a09082002d0b1376905099877d918eb2a75d278 not found: ID does not exist" containerID="cb5c7f3e98c4e221ca313ac45a09082002d0b1376905099877d918eb2a75d278" Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.172012 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb5c7f3e98c4e221ca313ac45a09082002d0b1376905099877d918eb2a75d278"} err="failed to get container status \"cb5c7f3e98c4e221ca313ac45a09082002d0b1376905099877d918eb2a75d278\": rpc error: code = NotFound desc = could not find container \"cb5c7f3e98c4e221ca313ac45a09082002d0b1376905099877d918eb2a75d278\": container with ID starting with cb5c7f3e98c4e221ca313ac45a09082002d0b1376905099877d918eb2a75d278 not found: ID does not exist" Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.172036 5007 scope.go:117] "RemoveContainer" containerID="c546cf05e0d4017b578a8442bb7e3fd0b9de752902692423a67f262121cdc3bd" Feb 18 20:04:51 crc kubenswrapper[5007]: E0218 
20:04:51.172359 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c546cf05e0d4017b578a8442bb7e3fd0b9de752902692423a67f262121cdc3bd\": container with ID starting with c546cf05e0d4017b578a8442bb7e3fd0b9de752902692423a67f262121cdc3bd not found: ID does not exist" containerID="c546cf05e0d4017b578a8442bb7e3fd0b9de752902692423a67f262121cdc3bd" Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.172386 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c546cf05e0d4017b578a8442bb7e3fd0b9de752902692423a67f262121cdc3bd"} err="failed to get container status \"c546cf05e0d4017b578a8442bb7e3fd0b9de752902692423a67f262121cdc3bd\": rpc error: code = NotFound desc = could not find container \"c546cf05e0d4017b578a8442bb7e3fd0b9de752902692423a67f262121cdc3bd\": container with ID starting with c546cf05e0d4017b578a8442bb7e3fd0b9de752902692423a67f262121cdc3bd not found: ID does not exist" Feb 18 20:04:51 crc kubenswrapper[5007]: I0218 20:04:51.919993 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" path="/var/lib/kubelet/pods/396bfe2a-80f9-4d7c-8534-a7b0cd8a8625/volumes" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.466956 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4tqgc"] Feb 18 20:04:58 crc kubenswrapper[5007]: E0218 20:04:58.467965 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" containerName="extract-content" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.467979 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" containerName="extract-content" Feb 18 20:04:58 crc kubenswrapper[5007]: E0218 20:04:58.467991 5007 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6150d882-eff1-456d-9925-5d8970d93cf9" containerName="extract-utilities" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.467998 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="6150d882-eff1-456d-9925-5d8970d93cf9" containerName="extract-utilities" Feb 18 20:04:58 crc kubenswrapper[5007]: E0218 20:04:58.468011 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" containerName="registry-server" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.468018 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" containerName="registry-server" Feb 18 20:04:58 crc kubenswrapper[5007]: E0218 20:04:58.468035 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6150d882-eff1-456d-9925-5d8970d93cf9" containerName="extract-content" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.468041 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="6150d882-eff1-456d-9925-5d8970d93cf9" containerName="extract-content" Feb 18 20:04:58 crc kubenswrapper[5007]: E0218 20:04:58.468051 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" containerName="extract-utilities" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.468057 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" containerName="extract-utilities" Feb 18 20:04:58 crc kubenswrapper[5007]: E0218 20:04:58.468068 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6150d882-eff1-456d-9925-5d8970d93cf9" containerName="registry-server" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.468073 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="6150d882-eff1-456d-9925-5d8970d93cf9" containerName="registry-server" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.468265 5007 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6150d882-eff1-456d-9925-5d8970d93cf9" containerName="registry-server" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.468281 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="396bfe2a-80f9-4d7c-8534-a7b0cd8a8625" containerName="registry-server" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.469848 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.497023 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tqgc"] Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.521954 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14737e74-a8a0-406c-acd1-34e14e6c7363-catalog-content\") pod \"certified-operators-4tqgc\" (UID: \"14737e74-a8a0-406c-acd1-34e14e6c7363\") " pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.522062 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9nj\" (UniqueName: \"kubernetes.io/projected/14737e74-a8a0-406c-acd1-34e14e6c7363-kube-api-access-4k9nj\") pod \"certified-operators-4tqgc\" (UID: \"14737e74-a8a0-406c-acd1-34e14e6c7363\") " pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.522299 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14737e74-a8a0-406c-acd1-34e14e6c7363-utilities\") pod \"certified-operators-4tqgc\" (UID: \"14737e74-a8a0-406c-acd1-34e14e6c7363\") " pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.624254 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14737e74-a8a0-406c-acd1-34e14e6c7363-utilities\") pod \"certified-operators-4tqgc\" (UID: \"14737e74-a8a0-406c-acd1-34e14e6c7363\") " pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.624587 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14737e74-a8a0-406c-acd1-34e14e6c7363-catalog-content\") pod \"certified-operators-4tqgc\" (UID: \"14737e74-a8a0-406c-acd1-34e14e6c7363\") " pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.624676 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k9nj\" (UniqueName: \"kubernetes.io/projected/14737e74-a8a0-406c-acd1-34e14e6c7363-kube-api-access-4k9nj\") pod \"certified-operators-4tqgc\" (UID: \"14737e74-a8a0-406c-acd1-34e14e6c7363\") " pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.624717 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14737e74-a8a0-406c-acd1-34e14e6c7363-utilities\") pod \"certified-operators-4tqgc\" (UID: \"14737e74-a8a0-406c-acd1-34e14e6c7363\") " pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.625062 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14737e74-a8a0-406c-acd1-34e14e6c7363-catalog-content\") pod \"certified-operators-4tqgc\" (UID: \"14737e74-a8a0-406c-acd1-34e14e6c7363\") " pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.643651 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4k9nj\" (UniqueName: \"kubernetes.io/projected/14737e74-a8a0-406c-acd1-34e14e6c7363-kube-api-access-4k9nj\") pod \"certified-operators-4tqgc\" (UID: \"14737e74-a8a0-406c-acd1-34e14e6c7363\") " pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:04:58 crc kubenswrapper[5007]: I0218 20:04:58.798584 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:04:59 crc kubenswrapper[5007]: I0218 20:04:59.341594 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tqgc"] Feb 18 20:04:59 crc kubenswrapper[5007]: W0218 20:04:59.357995 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14737e74_a8a0_406c_acd1_34e14e6c7363.slice/crio-e8184fb93c831ee013c43725ebdf022f1cabeb4e590a4908b8cb2fbe4f0bf529 WatchSource:0}: Error finding container e8184fb93c831ee013c43725ebdf022f1cabeb4e590a4908b8cb2fbe4f0bf529: Status 404 returned error can't find the container with id e8184fb93c831ee013c43725ebdf022f1cabeb4e590a4908b8cb2fbe4f0bf529 Feb 18 20:05:00 crc kubenswrapper[5007]: I0218 20:05:00.174405 5007 generic.go:334] "Generic (PLEG): container finished" podID="14737e74-a8a0-406c-acd1-34e14e6c7363" containerID="8c5b52def9c9c6c7db3aae9d4e78d1d69cdf5e6ef92b9978f08977450e76e9b6" exitCode=0 Feb 18 20:05:00 crc kubenswrapper[5007]: I0218 20:05:00.175591 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tqgc" event={"ID":"14737e74-a8a0-406c-acd1-34e14e6c7363","Type":"ContainerDied","Data":"8c5b52def9c9c6c7db3aae9d4e78d1d69cdf5e6ef92b9978f08977450e76e9b6"} Feb 18 20:05:00 crc kubenswrapper[5007]: I0218 20:05:00.175654 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tqgc" 
event={"ID":"14737e74-a8a0-406c-acd1-34e14e6c7363","Type":"ContainerStarted","Data":"e8184fb93c831ee013c43725ebdf022f1cabeb4e590a4908b8cb2fbe4f0bf529"} Feb 18 20:05:02 crc kubenswrapper[5007]: I0218 20:05:02.197540 5007 generic.go:334] "Generic (PLEG): container finished" podID="14737e74-a8a0-406c-acd1-34e14e6c7363" containerID="eca851b68d991d2349b8392e320070b27b92de1942087af7dbb02b98d0ccc9ea" exitCode=0 Feb 18 20:05:02 crc kubenswrapper[5007]: I0218 20:05:02.197646 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tqgc" event={"ID":"14737e74-a8a0-406c-acd1-34e14e6c7363","Type":"ContainerDied","Data":"eca851b68d991d2349b8392e320070b27b92de1942087af7dbb02b98d0ccc9ea"} Feb 18 20:05:03 crc kubenswrapper[5007]: I0218 20:05:03.209705 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tqgc" event={"ID":"14737e74-a8a0-406c-acd1-34e14e6c7363","Type":"ContainerStarted","Data":"ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc"} Feb 18 20:05:03 crc kubenswrapper[5007]: I0218 20:05:03.238446 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4tqgc" podStartSLOduration=2.813565658 podStartE2EDuration="5.238408794s" podCreationTimestamp="2026-02-18 20:04:58 +0000 UTC" firstStartedPulling="2026-02-18 20:05:00.177030515 +0000 UTC m=+1584.959150059" lastFinishedPulling="2026-02-18 20:05:02.601873651 +0000 UTC m=+1587.383993195" observedRunningTime="2026-02-18 20:05:03.232263329 +0000 UTC m=+1588.014382863" watchObservedRunningTime="2026-02-18 20:05:03.238408794 +0000 UTC m=+1588.020528308" Feb 18 20:05:08 crc kubenswrapper[5007]: I0218 20:05:08.799173 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:05:08 crc kubenswrapper[5007]: I0218 20:05:08.799897 5007 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:05:08 crc kubenswrapper[5007]: I0218 20:05:08.854790 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:05:09 crc kubenswrapper[5007]: I0218 20:05:09.322212 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:05:09 crc kubenswrapper[5007]: I0218 20:05:09.384630 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4tqgc"] Feb 18 20:05:11 crc kubenswrapper[5007]: I0218 20:05:11.295152 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4tqgc" podUID="14737e74-a8a0-406c-acd1-34e14e6c7363" containerName="registry-server" containerID="cri-o://ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc" gracePeriod=2 Feb 18 20:05:11 crc kubenswrapper[5007]: I0218 20:05:11.795645 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:05:11 crc kubenswrapper[5007]: I0218 20:05:11.902848 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k9nj\" (UniqueName: \"kubernetes.io/projected/14737e74-a8a0-406c-acd1-34e14e6c7363-kube-api-access-4k9nj\") pod \"14737e74-a8a0-406c-acd1-34e14e6c7363\" (UID: \"14737e74-a8a0-406c-acd1-34e14e6c7363\") " Feb 18 20:05:11 crc kubenswrapper[5007]: I0218 20:05:11.902969 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14737e74-a8a0-406c-acd1-34e14e6c7363-catalog-content\") pod \"14737e74-a8a0-406c-acd1-34e14e6c7363\" (UID: \"14737e74-a8a0-406c-acd1-34e14e6c7363\") " Feb 18 20:05:11 crc kubenswrapper[5007]: I0218 20:05:11.903354 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14737e74-a8a0-406c-acd1-34e14e6c7363-utilities\") pod \"14737e74-a8a0-406c-acd1-34e14e6c7363\" (UID: \"14737e74-a8a0-406c-acd1-34e14e6c7363\") " Feb 18 20:05:11 crc kubenswrapper[5007]: I0218 20:05:11.904501 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14737e74-a8a0-406c-acd1-34e14e6c7363-utilities" (OuterVolumeSpecName: "utilities") pod "14737e74-a8a0-406c-acd1-34e14e6c7363" (UID: "14737e74-a8a0-406c-acd1-34e14e6c7363"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:05:11 crc kubenswrapper[5007]: I0218 20:05:11.908765 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14737e74-a8a0-406c-acd1-34e14e6c7363-kube-api-access-4k9nj" (OuterVolumeSpecName: "kube-api-access-4k9nj") pod "14737e74-a8a0-406c-acd1-34e14e6c7363" (UID: "14737e74-a8a0-406c-acd1-34e14e6c7363"). InnerVolumeSpecName "kube-api-access-4k9nj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.005796 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k9nj\" (UniqueName: \"kubernetes.io/projected/14737e74-a8a0-406c-acd1-34e14e6c7363-kube-api-access-4k9nj\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.005832 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14737e74-a8a0-406c-acd1-34e14e6c7363-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.310536 5007 generic.go:334] "Generic (PLEG): container finished" podID="14737e74-a8a0-406c-acd1-34e14e6c7363" containerID="ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc" exitCode=0 Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.310583 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tqgc" event={"ID":"14737e74-a8a0-406c-acd1-34e14e6c7363","Type":"ContainerDied","Data":"ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc"} Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.310656 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tqgc" event={"ID":"14737e74-a8a0-406c-acd1-34e14e6c7363","Type":"ContainerDied","Data":"e8184fb93c831ee013c43725ebdf022f1cabeb4e590a4908b8cb2fbe4f0bf529"} Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.310672 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4tqgc" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.310683 5007 scope.go:117] "RemoveContainer" containerID="ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.352604 5007 scope.go:117] "RemoveContainer" containerID="eca851b68d991d2349b8392e320070b27b92de1942087af7dbb02b98d0ccc9ea" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.385485 5007 scope.go:117] "RemoveContainer" containerID="8c5b52def9c9c6c7db3aae9d4e78d1d69cdf5e6ef92b9978f08977450e76e9b6" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.427359 5007 scope.go:117] "RemoveContainer" containerID="ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc" Feb 18 20:05:12 crc kubenswrapper[5007]: E0218 20:05:12.427976 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc\": container with ID starting with ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc not found: ID does not exist" containerID="ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.428028 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc"} err="failed to get container status \"ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc\": rpc error: code = NotFound desc = could not find container \"ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc\": container with ID starting with ee5af6f1472f5df482a967ba48eabbd0a63c09c6d30142eb7a40bf6c8e17fcfc not found: ID does not exist" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.428124 5007 scope.go:117] "RemoveContainer" 
containerID="eca851b68d991d2349b8392e320070b27b92de1942087af7dbb02b98d0ccc9ea" Feb 18 20:05:12 crc kubenswrapper[5007]: E0218 20:05:12.428585 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca851b68d991d2349b8392e320070b27b92de1942087af7dbb02b98d0ccc9ea\": container with ID starting with eca851b68d991d2349b8392e320070b27b92de1942087af7dbb02b98d0ccc9ea not found: ID does not exist" containerID="eca851b68d991d2349b8392e320070b27b92de1942087af7dbb02b98d0ccc9ea" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.429007 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca851b68d991d2349b8392e320070b27b92de1942087af7dbb02b98d0ccc9ea"} err="failed to get container status \"eca851b68d991d2349b8392e320070b27b92de1942087af7dbb02b98d0ccc9ea\": rpc error: code = NotFound desc = could not find container \"eca851b68d991d2349b8392e320070b27b92de1942087af7dbb02b98d0ccc9ea\": container with ID starting with eca851b68d991d2349b8392e320070b27b92de1942087af7dbb02b98d0ccc9ea not found: ID does not exist" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.429762 5007 scope.go:117] "RemoveContainer" containerID="8c5b52def9c9c6c7db3aae9d4e78d1d69cdf5e6ef92b9978f08977450e76e9b6" Feb 18 20:05:12 crc kubenswrapper[5007]: E0218 20:05:12.430204 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c5b52def9c9c6c7db3aae9d4e78d1d69cdf5e6ef92b9978f08977450e76e9b6\": container with ID starting with 8c5b52def9c9c6c7db3aae9d4e78d1d69cdf5e6ef92b9978f08977450e76e9b6 not found: ID does not exist" containerID="8c5b52def9c9c6c7db3aae9d4e78d1d69cdf5e6ef92b9978f08977450e76e9b6" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.430269 5007 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8c5b52def9c9c6c7db3aae9d4e78d1d69cdf5e6ef92b9978f08977450e76e9b6"} err="failed to get container status \"8c5b52def9c9c6c7db3aae9d4e78d1d69cdf5e6ef92b9978f08977450e76e9b6\": rpc error: code = NotFound desc = could not find container \"8c5b52def9c9c6c7db3aae9d4e78d1d69cdf5e6ef92b9978f08977450e76e9b6\": container with ID starting with 8c5b52def9c9c6c7db3aae9d4e78d1d69cdf5e6ef92b9978f08977450e76e9b6 not found: ID does not exist" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.522720 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lbzxp"] Feb 18 20:05:12 crc kubenswrapper[5007]: E0218 20:05:12.523346 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14737e74-a8a0-406c-acd1-34e14e6c7363" containerName="extract-content" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.523372 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="14737e74-a8a0-406c-acd1-34e14e6c7363" containerName="extract-content" Feb 18 20:05:12 crc kubenswrapper[5007]: E0218 20:05:12.523423 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14737e74-a8a0-406c-acd1-34e14e6c7363" containerName="registry-server" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.523460 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="14737e74-a8a0-406c-acd1-34e14e6c7363" containerName="registry-server" Feb 18 20:05:12 crc kubenswrapper[5007]: E0218 20:05:12.523507 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14737e74-a8a0-406c-acd1-34e14e6c7363" containerName="extract-utilities" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.523522 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="14737e74-a8a0-406c-acd1-34e14e6c7363" containerName="extract-utilities" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.523845 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="14737e74-a8a0-406c-acd1-34e14e6c7363" 
containerName="registry-server" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.526184 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.543124 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lbzxp"] Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.620238 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-catalog-content\") pod \"community-operators-lbzxp\" (UID: \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\") " pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.620312 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-utilities\") pod \"community-operators-lbzxp\" (UID: \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\") " pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.620337 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xhlg\" (UniqueName: \"kubernetes.io/projected/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-kube-api-access-9xhlg\") pod \"community-operators-lbzxp\" (UID: \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\") " pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.722942 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-catalog-content\") pod \"community-operators-lbzxp\" (UID: \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\") " 
pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.723423 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-utilities\") pod \"community-operators-lbzxp\" (UID: \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\") " pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.723591 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xhlg\" (UniqueName: \"kubernetes.io/projected/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-kube-api-access-9xhlg\") pod \"community-operators-lbzxp\" (UID: \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\") " pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.723643 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-catalog-content\") pod \"community-operators-lbzxp\" (UID: \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\") " pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.723938 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-utilities\") pod \"community-operators-lbzxp\" (UID: \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\") " pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.742178 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xhlg\" (UniqueName: \"kubernetes.io/projected/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-kube-api-access-9xhlg\") pod \"community-operators-lbzxp\" (UID: \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\") " 
pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.793510 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14737e74-a8a0-406c-acd1-34e14e6c7363-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14737e74-a8a0-406c-acd1-34e14e6c7363" (UID: "14737e74-a8a0-406c-acd1-34e14e6c7363"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.825915 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14737e74-a8a0-406c-acd1-34e14e6c7363-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:12 crc kubenswrapper[5007]: I0218 20:05:12.861764 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:13 crc kubenswrapper[5007]: I0218 20:05:13.045323 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4tqgc"] Feb 18 20:05:13 crc kubenswrapper[5007]: I0218 20:05:13.054959 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4tqgc"] Feb 18 20:05:13 crc kubenswrapper[5007]: I0218 20:05:13.478054 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lbzxp"] Feb 18 20:05:13 crc kubenswrapper[5007]: I0218 20:05:13.922013 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14737e74-a8a0-406c-acd1-34e14e6c7363" path="/var/lib/kubelet/pods/14737e74-a8a0-406c-acd1-34e14e6c7363/volumes" Feb 18 20:05:14 crc kubenswrapper[5007]: I0218 20:05:14.331260 5007 generic.go:334] "Generic (PLEG): container finished" podID="91ffd501-a21a-4de1-a82c-f8b928bcf7ef" containerID="d6f6bb12e79a28e06e86a6dec0723f1580567db84e88fc5535978594c198d223" exitCode=0 Feb 18 
20:05:14 crc kubenswrapper[5007]: I0218 20:05:14.331318 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbzxp" event={"ID":"91ffd501-a21a-4de1-a82c-f8b928bcf7ef","Type":"ContainerDied","Data":"d6f6bb12e79a28e06e86a6dec0723f1580567db84e88fc5535978594c198d223"} Feb 18 20:05:14 crc kubenswrapper[5007]: I0218 20:05:14.331349 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbzxp" event={"ID":"91ffd501-a21a-4de1-a82c-f8b928bcf7ef","Type":"ContainerStarted","Data":"1075de2ecc92457ef5964113faa2618e807a575f20051cd48d14779bb7e23427"} Feb 18 20:05:15 crc kubenswrapper[5007]: I0218 20:05:15.347598 5007 generic.go:334] "Generic (PLEG): container finished" podID="91ffd501-a21a-4de1-a82c-f8b928bcf7ef" containerID="4a3d3b72ec8c02d037a9db5298237b5bf275e5ff8500d29d35dec9130b289b89" exitCode=0 Feb 18 20:05:15 crc kubenswrapper[5007]: I0218 20:05:15.347712 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbzxp" event={"ID":"91ffd501-a21a-4de1-a82c-f8b928bcf7ef","Type":"ContainerDied","Data":"4a3d3b72ec8c02d037a9db5298237b5bf275e5ff8500d29d35dec9130b289b89"} Feb 18 20:05:15 crc kubenswrapper[5007]: I0218 20:05:15.663992 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:05:15 crc kubenswrapper[5007]: I0218 20:05:15.664050 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:05:15 crc 
kubenswrapper[5007]: I0218 20:05:15.664102 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 20:05:15 crc kubenswrapper[5007]: I0218 20:05:15.664946 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:05:15 crc kubenswrapper[5007]: I0218 20:05:15.665014 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" gracePeriod=600 Feb 18 20:05:15 crc kubenswrapper[5007]: E0218 20:05:15.785956 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:05:16 crc kubenswrapper[5007]: I0218 20:05:16.363020 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" exitCode=0 Feb 18 20:05:16 crc kubenswrapper[5007]: I0218 20:05:16.363240 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" 
event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90"} Feb 18 20:05:16 crc kubenswrapper[5007]: I0218 20:05:16.363350 5007 scope.go:117] "RemoveContainer" containerID="2fca2ab9c592f01755a09509912ed2a7ab21cefa3453d87cc11d33a8e4e411bb" Feb 18 20:05:16 crc kubenswrapper[5007]: I0218 20:05:16.364114 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:05:16 crc kubenswrapper[5007]: E0218 20:05:16.364517 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:05:16 crc kubenswrapper[5007]: I0218 20:05:16.366603 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbzxp" event={"ID":"91ffd501-a21a-4de1-a82c-f8b928bcf7ef","Type":"ContainerStarted","Data":"278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520"} Feb 18 20:05:16 crc kubenswrapper[5007]: I0218 20:05:16.409279 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lbzxp" podStartSLOduration=2.994784658 podStartE2EDuration="4.409260066s" podCreationTimestamp="2026-02-18 20:05:12 +0000 UTC" firstStartedPulling="2026-02-18 20:05:14.333529005 +0000 UTC m=+1599.115648539" lastFinishedPulling="2026-02-18 20:05:15.748004423 +0000 UTC m=+1600.530123947" observedRunningTime="2026-02-18 20:05:16.403879583 +0000 UTC m=+1601.185999107" watchObservedRunningTime="2026-02-18 20:05:16.409260066 +0000 UTC m=+1601.191379590" Feb 18 20:05:22 crc kubenswrapper[5007]: 
I0218 20:05:22.862849 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:22 crc kubenswrapper[5007]: I0218 20:05:22.863491 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:22 crc kubenswrapper[5007]: I0218 20:05:22.913955 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:23 crc kubenswrapper[5007]: I0218 20:05:23.511196 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:23 crc kubenswrapper[5007]: I0218 20:05:23.564908 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lbzxp"] Feb 18 20:05:25 crc kubenswrapper[5007]: I0218 20:05:25.047305 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2827-account-create-update-bnz5f"] Feb 18 20:05:25 crc kubenswrapper[5007]: I0218 20:05:25.059166 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ec0f-account-create-update-sgpb5"] Feb 18 20:05:25 crc kubenswrapper[5007]: I0218 20:05:25.069529 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2827-account-create-update-bnz5f"] Feb 18 20:05:25 crc kubenswrapper[5007]: I0218 20:05:25.079974 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ec0f-account-create-update-sgpb5"] Feb 18 20:05:25 crc kubenswrapper[5007]: I0218 20:05:25.492035 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lbzxp" podUID="91ffd501-a21a-4de1-a82c-f8b928bcf7ef" containerName="registry-server" containerID="cri-o://278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520" gracePeriod=2 Feb 18 20:05:25 crc 
kubenswrapper[5007]: I0218 20:05:25.919940 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d15bfff7-0338-47e8-bdd1-4ab3241829b0" path="/var/lib/kubelet/pods/d15bfff7-0338-47e8-bdd1-4ab3241829b0/volumes" Feb 18 20:05:25 crc kubenswrapper[5007]: I0218 20:05:25.920746 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d690d55d-b5a9-4066-ab9c-c8ddb23288b8" path="/var/lib/kubelet/pods/d690d55d-b5a9-4066-ab9c-c8ddb23288b8/volumes" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.053819 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-n5t86"] Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.065780 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-gxqz9"] Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.074163 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-6136-account-create-update-d9mk4"] Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.083959 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-gxqz9"] Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.092715 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-n5t86"] Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.100352 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-v9r5p"] Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.108315 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-6136-account-create-update-d9mk4"] Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.118032 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-v9r5p"] Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.485274 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.502943 5007 generic.go:334] "Generic (PLEG): container finished" podID="91ffd501-a21a-4de1-a82c-f8b928bcf7ef" containerID="278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520" exitCode=0 Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.502990 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbzxp" event={"ID":"91ffd501-a21a-4de1-a82c-f8b928bcf7ef","Type":"ContainerDied","Data":"278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520"} Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.503016 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbzxp" event={"ID":"91ffd501-a21a-4de1-a82c-f8b928bcf7ef","Type":"ContainerDied","Data":"1075de2ecc92457ef5964113faa2618e807a575f20051cd48d14779bb7e23427"} Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.503033 5007 scope.go:117] "RemoveContainer" containerID="278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.503166 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lbzxp" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.529555 5007 scope.go:117] "RemoveContainer" containerID="4a3d3b72ec8c02d037a9db5298237b5bf275e5ff8500d29d35dec9130b289b89" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.564175 5007 scope.go:117] "RemoveContainer" containerID="d6f6bb12e79a28e06e86a6dec0723f1580567db84e88fc5535978594c198d223" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.603422 5007 scope.go:117] "RemoveContainer" containerID="278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520" Feb 18 20:05:26 crc kubenswrapper[5007]: E0218 20:05:26.603897 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520\": container with ID starting with 278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520 not found: ID does not exist" containerID="278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.603945 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520"} err="failed to get container status \"278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520\": rpc error: code = NotFound desc = could not find container \"278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520\": container with ID starting with 278de731d40b9e34f124c2484ecd3088cf0f0a48529ed081ef32eecb24559520 not found: ID does not exist" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.603966 5007 scope.go:117] "RemoveContainer" containerID="4a3d3b72ec8c02d037a9db5298237b5bf275e5ff8500d29d35dec9130b289b89" Feb 18 20:05:26 crc kubenswrapper[5007]: E0218 20:05:26.604215 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"4a3d3b72ec8c02d037a9db5298237b5bf275e5ff8500d29d35dec9130b289b89\": container with ID starting with 4a3d3b72ec8c02d037a9db5298237b5bf275e5ff8500d29d35dec9130b289b89 not found: ID does not exist" containerID="4a3d3b72ec8c02d037a9db5298237b5bf275e5ff8500d29d35dec9130b289b89" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.604252 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a3d3b72ec8c02d037a9db5298237b5bf275e5ff8500d29d35dec9130b289b89"} err="failed to get container status \"4a3d3b72ec8c02d037a9db5298237b5bf275e5ff8500d29d35dec9130b289b89\": rpc error: code = NotFound desc = could not find container \"4a3d3b72ec8c02d037a9db5298237b5bf275e5ff8500d29d35dec9130b289b89\": container with ID starting with 4a3d3b72ec8c02d037a9db5298237b5bf275e5ff8500d29d35dec9130b289b89 not found: ID does not exist" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.604265 5007 scope.go:117] "RemoveContainer" containerID="d6f6bb12e79a28e06e86a6dec0723f1580567db84e88fc5535978594c198d223" Feb 18 20:05:26 crc kubenswrapper[5007]: E0218 20:05:26.604778 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f6bb12e79a28e06e86a6dec0723f1580567db84e88fc5535978594c198d223\": container with ID starting with d6f6bb12e79a28e06e86a6dec0723f1580567db84e88fc5535978594c198d223 not found: ID does not exist" containerID="d6f6bb12e79a28e06e86a6dec0723f1580567db84e88fc5535978594c198d223" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.604796 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f6bb12e79a28e06e86a6dec0723f1580567db84e88fc5535978594c198d223"} err="failed to get container status \"d6f6bb12e79a28e06e86a6dec0723f1580567db84e88fc5535978594c198d223\": rpc error: code = NotFound desc = could not find container 
\"d6f6bb12e79a28e06e86a6dec0723f1580567db84e88fc5535978594c198d223\": container with ID starting with d6f6bb12e79a28e06e86a6dec0723f1580567db84e88fc5535978594c198d223 not found: ID does not exist" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.612911 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-catalog-content\") pod \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\" (UID: \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\") " Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.613820 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-utilities\") pod \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\" (UID: \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\") " Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.613889 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xhlg\" (UniqueName: \"kubernetes.io/projected/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-kube-api-access-9xhlg\") pod \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\" (UID: \"91ffd501-a21a-4de1-a82c-f8b928bcf7ef\") " Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.614761 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-utilities" (OuterVolumeSpecName: "utilities") pod "91ffd501-a21a-4de1-a82c-f8b928bcf7ef" (UID: "91ffd501-a21a-4de1-a82c-f8b928bcf7ef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.619618 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-kube-api-access-9xhlg" (OuterVolumeSpecName: "kube-api-access-9xhlg") pod "91ffd501-a21a-4de1-a82c-f8b928bcf7ef" (UID: "91ffd501-a21a-4de1-a82c-f8b928bcf7ef"). InnerVolumeSpecName "kube-api-access-9xhlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.662954 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91ffd501-a21a-4de1-a82c-f8b928bcf7ef" (UID: "91ffd501-a21a-4de1-a82c-f8b928bcf7ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.716935 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.716982 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xhlg\" (UniqueName: \"kubernetes.io/projected/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-kube-api-access-9xhlg\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.716997 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ffd501-a21a-4de1-a82c-f8b928bcf7ef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 20:05:26.845857 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lbzxp"] Feb 18 20:05:26 crc kubenswrapper[5007]: I0218 
20:05:26.904309 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lbzxp"] Feb 18 20:05:27 crc kubenswrapper[5007]: I0218 20:05:27.028402 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-ml6mz"] Feb 18 20:05:27 crc kubenswrapper[5007]: I0218 20:05:27.036791 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-ml6mz"] Feb 18 20:05:27 crc kubenswrapper[5007]: I0218 20:05:27.909826 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:05:27 crc kubenswrapper[5007]: E0218 20:05:27.910336 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:05:27 crc kubenswrapper[5007]: I0218 20:05:27.924398 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08430e05-3044-4b57-bbda-98dbf36842eb" path="/var/lib/kubelet/pods/08430e05-3044-4b57-bbda-98dbf36842eb/volumes" Feb 18 20:05:27 crc kubenswrapper[5007]: I0218 20:05:27.925168 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53910845-3e44-4200-8277-3e00ead09fc8" path="/var/lib/kubelet/pods/53910845-3e44-4200-8277-3e00ead09fc8/volumes" Feb 18 20:05:27 crc kubenswrapper[5007]: I0218 20:05:27.925970 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="804d707b-d8ff-4ae9-883d-2f89e84bc9bd" path="/var/lib/kubelet/pods/804d707b-d8ff-4ae9-883d-2f89e84bc9bd/volumes" Feb 18 20:05:27 crc kubenswrapper[5007]: I0218 20:05:27.926746 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="91ffd501-a21a-4de1-a82c-f8b928bcf7ef" path="/var/lib/kubelet/pods/91ffd501-a21a-4de1-a82c-f8b928bcf7ef/volumes" Feb 18 20:05:27 crc kubenswrapper[5007]: I0218 20:05:27.928497 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0038747-eb9f-4f51-b252-52d9b594fe41" path="/var/lib/kubelet/pods/a0038747-eb9f-4f51-b252-52d9b594fe41/volumes" Feb 18 20:05:27 crc kubenswrapper[5007]: I0218 20:05:27.929262 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697" path="/var/lib/kubelet/pods/e8b1c5b4-dd3b-492e-9c6b-52e9bd83e697/volumes" Feb 18 20:05:29 crc kubenswrapper[5007]: I0218 20:05:29.031771 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a0ac-account-create-update-k24dv"] Feb 18 20:05:29 crc kubenswrapper[5007]: I0218 20:05:29.041876 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a0ac-account-create-update-k24dv"] Feb 18 20:05:29 crc kubenswrapper[5007]: I0218 20:05:29.922729 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdee3087-3a04-4f50-b310-26ac36c83aa7" path="/var/lib/kubelet/pods/fdee3087-3a04-4f50-b310-26ac36c83aa7/volumes" Feb 18 20:05:31 crc kubenswrapper[5007]: I0218 20:05:31.570566 5007 generic.go:334] "Generic (PLEG): container finished" podID="29567b30-6e4c-4ee4-9f94-2f4d58123561" containerID="dc1d8b5aa9cf602c9f5f3d721ed2bdf2c0a38448e98e53f8d5a13d34d3bbd74d" exitCode=0 Feb 18 20:05:31 crc kubenswrapper[5007]: I0218 20:05:31.570664 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" event={"ID":"29567b30-6e4c-4ee4-9f94-2f4d58123561","Type":"ContainerDied","Data":"dc1d8b5aa9cf602c9f5f3d721ed2bdf2c0a38448e98e53f8d5a13d34d3bbd74d"} Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.064870 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.147994 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-bootstrap-combined-ca-bundle\") pod \"29567b30-6e4c-4ee4-9f94-2f4d58123561\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.148188 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvj7v\" (UniqueName: \"kubernetes.io/projected/29567b30-6e4c-4ee4-9f94-2f4d58123561-kube-api-access-tvj7v\") pod \"29567b30-6e4c-4ee4-9f94-2f4d58123561\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.148238 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-ssh-key-openstack-edpm-ipam\") pod \"29567b30-6e4c-4ee4-9f94-2f4d58123561\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.148305 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-inventory\") pod \"29567b30-6e4c-4ee4-9f94-2f4d58123561\" (UID: \"29567b30-6e4c-4ee4-9f94-2f4d58123561\") " Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.166347 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "29567b30-6e4c-4ee4-9f94-2f4d58123561" (UID: "29567b30-6e4c-4ee4-9f94-2f4d58123561"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.170690 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29567b30-6e4c-4ee4-9f94-2f4d58123561-kube-api-access-tvj7v" (OuterVolumeSpecName: "kube-api-access-tvj7v") pod "29567b30-6e4c-4ee4-9f94-2f4d58123561" (UID: "29567b30-6e4c-4ee4-9f94-2f4d58123561"). InnerVolumeSpecName "kube-api-access-tvj7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.187740 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "29567b30-6e4c-4ee4-9f94-2f4d58123561" (UID: "29567b30-6e4c-4ee4-9f94-2f4d58123561"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.188316 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-inventory" (OuterVolumeSpecName: "inventory") pod "29567b30-6e4c-4ee4-9f94-2f4d58123561" (UID: "29567b30-6e4c-4ee4-9f94-2f4d58123561"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.250421 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvj7v\" (UniqueName: \"kubernetes.io/projected/29567b30-6e4c-4ee4-9f94-2f4d58123561-kube-api-access-tvj7v\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.250477 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.250488 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.250499 5007 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29567b30-6e4c-4ee4-9f94-2f4d58123561-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.597467 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" event={"ID":"29567b30-6e4c-4ee4-9f94-2f4d58123561","Type":"ContainerDied","Data":"7a32c1c7958428db3eb2d348f345c45008b06d11bff2e83783308e6f47c7fdb5"} Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.597514 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a32c1c7958428db3eb2d348f345c45008b06d11bff2e83783308e6f47c7fdb5" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.597582 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w9gd4" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.699677 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5"] Feb 18 20:05:33 crc kubenswrapper[5007]: E0218 20:05:33.700241 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ffd501-a21a-4de1-a82c-f8b928bcf7ef" containerName="extract-utilities" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.700272 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ffd501-a21a-4de1-a82c-f8b928bcf7ef" containerName="extract-utilities" Feb 18 20:05:33 crc kubenswrapper[5007]: E0218 20:05:33.700292 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ffd501-a21a-4de1-a82c-f8b928bcf7ef" containerName="extract-content" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.700303 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ffd501-a21a-4de1-a82c-f8b928bcf7ef" containerName="extract-content" Feb 18 20:05:33 crc kubenswrapper[5007]: E0218 20:05:33.700315 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29567b30-6e4c-4ee4-9f94-2f4d58123561" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.700327 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="29567b30-6e4c-4ee4-9f94-2f4d58123561" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:33 crc kubenswrapper[5007]: E0218 20:05:33.700372 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ffd501-a21a-4de1-a82c-f8b928bcf7ef" containerName="registry-server" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.700391 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ffd501-a21a-4de1-a82c-f8b928bcf7ef" containerName="registry-server" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.700726 
5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="29567b30-6e4c-4ee4-9f94-2f4d58123561" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.700770 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ffd501-a21a-4de1-a82c-f8b928bcf7ef" containerName="registry-server" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.701662 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.704563 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.704907 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.705167 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.705172 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.718115 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5"] Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.759282 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6022d402-0777-4263-b252-8f41fa7a8af6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5\" (UID: \"6022d402-0777-4263-b252-8f41fa7a8af6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:05:33 
crc kubenswrapper[5007]: I0218 20:05:33.759330 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh5wj\" (UniqueName: \"kubernetes.io/projected/6022d402-0777-4263-b252-8f41fa7a8af6-kube-api-access-fh5wj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5\" (UID: \"6022d402-0777-4263-b252-8f41fa7a8af6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.759681 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6022d402-0777-4263-b252-8f41fa7a8af6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5\" (UID: \"6022d402-0777-4263-b252-8f41fa7a8af6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.861967 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6022d402-0777-4263-b252-8f41fa7a8af6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5\" (UID: \"6022d402-0777-4263-b252-8f41fa7a8af6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.862101 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh5wj\" (UniqueName: \"kubernetes.io/projected/6022d402-0777-4263-b252-8f41fa7a8af6-kube-api-access-fh5wj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5\" (UID: \"6022d402-0777-4263-b252-8f41fa7a8af6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.862363 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/6022d402-0777-4263-b252-8f41fa7a8af6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5\" (UID: \"6022d402-0777-4263-b252-8f41fa7a8af6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.869621 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6022d402-0777-4263-b252-8f41fa7a8af6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5\" (UID: \"6022d402-0777-4263-b252-8f41fa7a8af6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.871551 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6022d402-0777-4263-b252-8f41fa7a8af6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5\" (UID: \"6022d402-0777-4263-b252-8f41fa7a8af6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:05:33 crc kubenswrapper[5007]: I0218 20:05:33.883371 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh5wj\" (UniqueName: \"kubernetes.io/projected/6022d402-0777-4263-b252-8f41fa7a8af6-kube-api-access-fh5wj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5\" (UID: \"6022d402-0777-4263-b252-8f41fa7a8af6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:05:34 crc kubenswrapper[5007]: I0218 20:05:34.018529 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:05:34 crc kubenswrapper[5007]: I0218 20:05:34.728265 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5"] Feb 18 20:05:35 crc kubenswrapper[5007]: I0218 20:05:35.620342 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" event={"ID":"6022d402-0777-4263-b252-8f41fa7a8af6","Type":"ContainerStarted","Data":"5066169052b74cf058a875c2fca864a5e3cff42e691c1bee59736f42ee885eef"} Feb 18 20:05:35 crc kubenswrapper[5007]: I0218 20:05:35.620691 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" event={"ID":"6022d402-0777-4263-b252-8f41fa7a8af6","Type":"ContainerStarted","Data":"7d75e92141103b8192c5e64922dea3d18b23247b3b6c5b30e616db90ef1e64a6"} Feb 18 20:05:35 crc kubenswrapper[5007]: I0218 20:05:35.653687 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" podStartSLOduration=2.211433951 podStartE2EDuration="2.653666492s" podCreationTimestamp="2026-02-18 20:05:33 +0000 UTC" firstStartedPulling="2026-02-18 20:05:34.734810436 +0000 UTC m=+1619.516929960" lastFinishedPulling="2026-02-18 20:05:35.177042937 +0000 UTC m=+1619.959162501" observedRunningTime="2026-02-18 20:05:35.64616838 +0000 UTC m=+1620.428287904" watchObservedRunningTime="2026-02-18 20:05:35.653666492 +0000 UTC m=+1620.435786036" Feb 18 20:05:40 crc kubenswrapper[5007]: I0218 20:05:40.051694 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bc6bd"] Feb 18 20:05:40 crc kubenswrapper[5007]: I0218 20:05:40.062366 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bc6bd"] Feb 18 20:05:41 crc 
kubenswrapper[5007]: I0218 20:05:41.931087 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b" path="/var/lib/kubelet/pods/8bfbfda6-47bc-4ecf-bdb0-8fc6ad90237b/volumes" Feb 18 20:05:42 crc kubenswrapper[5007]: I0218 20:05:42.909925 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:05:42 crc kubenswrapper[5007]: E0218 20:05:42.910780 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:05:52 crc kubenswrapper[5007]: I0218 20:05:52.394985 5007 scope.go:117] "RemoveContainer" containerID="a05d673fbff11cc7234b18012d035f6bb1e674b0f0eafc2f7af2082ffae97e67" Feb 18 20:05:52 crc kubenswrapper[5007]: I0218 20:05:52.439560 5007 scope.go:117] "RemoveContainer" containerID="ea34ae493fec062986f372fa8b9b978212f9d7929c511f1ee113d9ceb94fdda2" Feb 18 20:05:52 crc kubenswrapper[5007]: I0218 20:05:52.499209 5007 scope.go:117] "RemoveContainer" containerID="0db9186cccff0e7d57ef5545887a0bd83b4c5a97851ba8efc545ca56a4e9bb25" Feb 18 20:05:52 crc kubenswrapper[5007]: I0218 20:05:52.563852 5007 scope.go:117] "RemoveContainer" containerID="50ba0e9a88e842147470f3ad82a36f19d8e4b93517b965706ff7298bff67660d" Feb 18 20:05:52 crc kubenswrapper[5007]: I0218 20:05:52.596974 5007 scope.go:117] "RemoveContainer" containerID="084761def3e3f9f96312615afb37ba8fd44a625616dc4c82afaafce1086c6fd3" Feb 18 20:05:52 crc kubenswrapper[5007]: I0218 20:05:52.640521 5007 scope.go:117] "RemoveContainer" containerID="934b25069cbab84ed0f4fc00310f2a2d12ce7d0b720b86b2e9f2b291484d56f4" Feb 18 20:05:52 crc 
kubenswrapper[5007]: I0218 20:05:52.690041 5007 scope.go:117] "RemoveContainer" containerID="861a6ebc79295f816e7451f9b86496e7ca01fd747eef5ce761fa0a91440a141e" Feb 18 20:05:52 crc kubenswrapper[5007]: I0218 20:05:52.721212 5007 scope.go:117] "RemoveContainer" containerID="964259f4349869ce443d8a17966f4a9967b3d422a54db3a76bb55321a160ffcc" Feb 18 20:05:52 crc kubenswrapper[5007]: I0218 20:05:52.752728 5007 scope.go:117] "RemoveContainer" containerID="2134c6cbc64db4fd062eb3639ac1ee56ecd91b0fc59fc2af441bf04bc3da4b7f" Feb 18 20:05:55 crc kubenswrapper[5007]: I0218 20:05:55.921001 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:05:55 crc kubenswrapper[5007]: E0218 20:05:55.921965 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:06:01 crc kubenswrapper[5007]: I0218 20:06:01.111761 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-btd2f"] Feb 18 20:06:01 crc kubenswrapper[5007]: I0218 20:06:01.120950 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rf7bq"] Feb 18 20:06:01 crc kubenswrapper[5007]: I0218 20:06:01.129006 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3afb-account-create-update-ckj4v"] Feb 18 20:06:01 crc kubenswrapper[5007]: I0218 20:06:01.136741 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-btd2f"] Feb 18 20:06:01 crc kubenswrapper[5007]: I0218 20:06:01.146035 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-s7llx"] 
Feb 18 20:06:01 crc kubenswrapper[5007]: I0218 20:06:01.154257 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rf7bq"] Feb 18 20:06:01 crc kubenswrapper[5007]: I0218 20:06:01.162301 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3afb-account-create-update-ckj4v"] Feb 18 20:06:01 crc kubenswrapper[5007]: I0218 20:06:01.170545 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-s7llx"] Feb 18 20:06:01 crc kubenswrapper[5007]: I0218 20:06:01.923715 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e0510ca-f7a8-437b-9625-f6be614e61cb" path="/var/lib/kubelet/pods/1e0510ca-f7a8-437b-9625-f6be614e61cb/volumes" Feb 18 20:06:01 crc kubenswrapper[5007]: I0218 20:06:01.925720 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65bb66ce-376a-4f9a-9202-1d122f4fd3a5" path="/var/lib/kubelet/pods/65bb66ce-376a-4f9a-9202-1d122f4fd3a5/volumes" Feb 18 20:06:01 crc kubenswrapper[5007]: I0218 20:06:01.927134 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="923f04c2-f4bb-4ba8-8b06-50e4cda22028" path="/var/lib/kubelet/pods/923f04c2-f4bb-4ba8-8b06-50e4cda22028/volumes" Feb 18 20:06:01 crc kubenswrapper[5007]: I0218 20:06:01.928723 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eade00fd-e7cb-4213-bc67-a0c2cec0cc13" path="/var/lib/kubelet/pods/eade00fd-e7cb-4213-bc67-a0c2cec0cc13/volumes" Feb 18 20:06:04 crc kubenswrapper[5007]: I0218 20:06:04.057751 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-905c-account-create-update-nmdg8"] Feb 18 20:06:04 crc kubenswrapper[5007]: I0218 20:06:04.075383 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-48e7-account-create-update-sczjm"] Feb 18 20:06:04 crc kubenswrapper[5007]: I0218 20:06:04.086887 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-48e7-account-create-update-sczjm"] Feb 18 20:06:04 crc kubenswrapper[5007]: I0218 20:06:04.096714 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-905c-account-create-update-nmdg8"] Feb 18 20:06:05 crc kubenswrapper[5007]: I0218 20:06:05.923935 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="822282ba-58c6-4182-88e3-1f097b9d912b" path="/var/lib/kubelet/pods/822282ba-58c6-4182-88e3-1f097b9d912b/volumes" Feb 18 20:06:05 crc kubenswrapper[5007]: I0218 20:06:05.924959 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f87bd0d-a9a8-4009-a7f3-54429ed28348" path="/var/lib/kubelet/pods/8f87bd0d-a9a8-4009-a7f3-54429ed28348/volumes" Feb 18 20:06:07 crc kubenswrapper[5007]: I0218 20:06:07.909698 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:06:07 crc kubenswrapper[5007]: E0218 20:06:07.910209 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:06:09 crc kubenswrapper[5007]: I0218 20:06:09.035108 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9m4t7"] Feb 18 20:06:09 crc kubenswrapper[5007]: I0218 20:06:09.043753 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9m4t7"] Feb 18 20:06:09 crc kubenswrapper[5007]: I0218 20:06:09.930899 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54a67f1-3156-43d6-b428-62e305e3bc7b" path="/var/lib/kubelet/pods/e54a67f1-3156-43d6-b428-62e305e3bc7b/volumes" Feb 18 20:06:15 crc 
kubenswrapper[5007]: I0218 20:06:15.075018 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-kwq4c"] Feb 18 20:06:15 crc kubenswrapper[5007]: I0218 20:06:15.097656 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-kwq4c"] Feb 18 20:06:15 crc kubenswrapper[5007]: I0218 20:06:15.935005 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a90b81-177c-40e6-8b75-be61fe81187d" path="/var/lib/kubelet/pods/18a90b81-177c-40e6-8b75-be61fe81187d/volumes" Feb 18 20:06:16 crc kubenswrapper[5007]: I0218 20:06:16.048593 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-xccwz"] Feb 18 20:06:16 crc kubenswrapper[5007]: I0218 20:06:16.066712 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-xccwz"] Feb 18 20:06:17 crc kubenswrapper[5007]: I0218 20:06:17.932614 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e4fea3-ff28-4966-964d-31c16c2a5c13" path="/var/lib/kubelet/pods/c8e4fea3-ff28-4966-964d-31c16c2a5c13/volumes" Feb 18 20:06:21 crc kubenswrapper[5007]: I0218 20:06:21.910291 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:06:21 crc kubenswrapper[5007]: E0218 20:06:21.911099 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:06:36 crc kubenswrapper[5007]: I0218 20:06:36.909837 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:06:36 crc 
kubenswrapper[5007]: E0218 20:06:36.910878 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:06:48 crc kubenswrapper[5007]: I0218 20:06:48.052645 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-44gls"] Feb 18 20:06:48 crc kubenswrapper[5007]: I0218 20:06:48.065137 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-44gls"] Feb 18 20:06:48 crc kubenswrapper[5007]: I0218 20:06:48.910257 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:06:48 crc kubenswrapper[5007]: E0218 20:06:48.910879 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:06:49 crc kubenswrapper[5007]: I0218 20:06:49.921636 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b4964c0-2913-437d-b391-ed607a8fa813" path="/var/lib/kubelet/pods/8b4964c0-2913-437d-b391-ed607a8fa813/volumes" Feb 18 20:06:53 crc kubenswrapper[5007]: I0218 20:06:53.012668 5007 scope.go:117] "RemoveContainer" containerID="4ef39e138cf0d8196912720f3b0fb777fdd29750a558037e7cbe4fd632e155ae" Feb 18 20:06:53 crc kubenswrapper[5007]: I0218 20:06:53.045151 5007 scope.go:117] "RemoveContainer" 
containerID="fd1cbff3f36c02a66920b807f52ec8f8b53a6cb3059a14698ff226b8e63f673b" Feb 18 20:06:53 crc kubenswrapper[5007]: I0218 20:06:53.108619 5007 scope.go:117] "RemoveContainer" containerID="0d8ab64e64a4c01ac65126e1b229ef053c97b9cc5a443e89bbc4a2abbc22b424" Feb 18 20:06:53 crc kubenswrapper[5007]: I0218 20:06:53.178245 5007 scope.go:117] "RemoveContainer" containerID="6572d32875d3676d07b76a344e136840d919b7b85966dbaf5b83d2a8272d9e18" Feb 18 20:06:53 crc kubenswrapper[5007]: I0218 20:06:53.205092 5007 scope.go:117] "RemoveContainer" containerID="19597b0459187abe0165d2adad8c292f845eed1dad825cfc117fefb74ae900ed" Feb 18 20:06:53 crc kubenswrapper[5007]: I0218 20:06:53.251806 5007 scope.go:117] "RemoveContainer" containerID="28efe183548b02d593e64047fc3f898e736d3878c42a7d78e5ee141a8d663941" Feb 18 20:06:53 crc kubenswrapper[5007]: I0218 20:06:53.307533 5007 scope.go:117] "RemoveContainer" containerID="79598e84754a931ed805f0d30c9a1eaf7cc195ebf9c76595c65bdc3a437dbd07" Feb 18 20:06:53 crc kubenswrapper[5007]: I0218 20:06:53.347456 5007 scope.go:117] "RemoveContainer" containerID="cc2481f4017b2e290e14eb84787608ef55729eb3eb8170844ce53e955ad03b4d" Feb 18 20:06:53 crc kubenswrapper[5007]: I0218 20:06:53.368089 5007 scope.go:117] "RemoveContainer" containerID="6acfd56f85677a05c749f2698b52b51e300134e487c2901339a9b3d159fab2e7" Feb 18 20:06:53 crc kubenswrapper[5007]: I0218 20:06:53.385389 5007 scope.go:117] "RemoveContainer" containerID="4369676ad53621869bcf2b6e2de3f304f9039df8eaa5c01ca8ce7707a8806728" Feb 18 20:06:53 crc kubenswrapper[5007]: I0218 20:06:53.408860 5007 scope.go:117] "RemoveContainer" containerID="9275bada7258c13d9fe560a69089a147ef4edd7e105e36cea578daaf6d76b74e" Feb 18 20:06:53 crc kubenswrapper[5007]: I0218 20:06:53.432801 5007 scope.go:117] "RemoveContainer" containerID="9eaeb9f4a0583d11fad85d70694bf7942af7d38df7f7e3c7c5b61ff9aad57bd2" Feb 18 20:06:59 crc kubenswrapper[5007]: I0218 20:06:59.910312 5007 scope.go:117] "RemoveContainer" 
containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:06:59 crc kubenswrapper[5007]: E0218 20:06:59.911111 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:07:09 crc kubenswrapper[5007]: I0218 20:07:09.743741 5007 generic.go:334] "Generic (PLEG): container finished" podID="6022d402-0777-4263-b252-8f41fa7a8af6" containerID="5066169052b74cf058a875c2fca864a5e3cff42e691c1bee59736f42ee885eef" exitCode=0 Feb 18 20:07:09 crc kubenswrapper[5007]: I0218 20:07:09.744342 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" event={"ID":"6022d402-0777-4263-b252-8f41fa7a8af6","Type":"ContainerDied","Data":"5066169052b74cf058a875c2fca864a5e3cff42e691c1bee59736f42ee885eef"} Feb 18 20:07:10 crc kubenswrapper[5007]: I0218 20:07:10.040762 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-spkvd"] Feb 18 20:07:10 crc kubenswrapper[5007]: I0218 20:07:10.056456 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-spkvd"] Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.040313 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qz65k"] Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.075903 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zljlc"] Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.086727 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qz65k"] Feb 18 20:07:11 crc 
kubenswrapper[5007]: I0218 20:07:11.097250 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zljlc"] Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.216450 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.392535 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6022d402-0777-4263-b252-8f41fa7a8af6-ssh-key-openstack-edpm-ipam\") pod \"6022d402-0777-4263-b252-8f41fa7a8af6\" (UID: \"6022d402-0777-4263-b252-8f41fa7a8af6\") " Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.392664 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh5wj\" (UniqueName: \"kubernetes.io/projected/6022d402-0777-4263-b252-8f41fa7a8af6-kube-api-access-fh5wj\") pod \"6022d402-0777-4263-b252-8f41fa7a8af6\" (UID: \"6022d402-0777-4263-b252-8f41fa7a8af6\") " Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.392846 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6022d402-0777-4263-b252-8f41fa7a8af6-inventory\") pod \"6022d402-0777-4263-b252-8f41fa7a8af6\" (UID: \"6022d402-0777-4263-b252-8f41fa7a8af6\") " Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.400690 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6022d402-0777-4263-b252-8f41fa7a8af6-kube-api-access-fh5wj" (OuterVolumeSpecName: "kube-api-access-fh5wj") pod "6022d402-0777-4263-b252-8f41fa7a8af6" (UID: "6022d402-0777-4263-b252-8f41fa7a8af6"). InnerVolumeSpecName "kube-api-access-fh5wj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.421654 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6022d402-0777-4263-b252-8f41fa7a8af6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6022d402-0777-4263-b252-8f41fa7a8af6" (UID: "6022d402-0777-4263-b252-8f41fa7a8af6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.432503 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6022d402-0777-4263-b252-8f41fa7a8af6-inventory" (OuterVolumeSpecName: "inventory") pod "6022d402-0777-4263-b252-8f41fa7a8af6" (UID: "6022d402-0777-4263-b252-8f41fa7a8af6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.494889 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6022d402-0777-4263-b252-8f41fa7a8af6-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.494931 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6022d402-0777-4263-b252-8f41fa7a8af6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.494946 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh5wj\" (UniqueName: \"kubernetes.io/projected/6022d402-0777-4263-b252-8f41fa7a8af6-kube-api-access-fh5wj\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.770319 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" 
event={"ID":"6022d402-0777-4263-b252-8f41fa7a8af6","Type":"ContainerDied","Data":"7d75e92141103b8192c5e64922dea3d18b23247b3b6c5b30e616db90ef1e64a6"} Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.770358 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d75e92141103b8192c5e64922dea3d18b23247b3b6c5b30e616db90ef1e64a6" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.770374 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-mq8n5" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.855225 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs"] Feb 18 20:07:11 crc kubenswrapper[5007]: E0218 20:07:11.855724 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6022d402-0777-4263-b252-8f41fa7a8af6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.855748 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="6022d402-0777-4263-b252-8f41fa7a8af6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.855996 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="6022d402-0777-4263-b252-8f41fa7a8af6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.860013 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.862263 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.862552 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.862729 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.863080 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.881480 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs"] Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.924977 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f906c05-1123-4e20-9792-a4930e299212" path="/var/lib/kubelet/pods/2f906c05-1123-4e20-9792-a4930e299212/volumes" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.925837 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="757421ca-4bfc-42d6-ae73-bb479f3b5975" path="/var/lib/kubelet/pods/757421ca-4bfc-42d6-ae73-bb479f3b5975/volumes" Feb 18 20:07:11 crc kubenswrapper[5007]: I0218 20:07:11.926684 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6b2577-67ac-4c20-8242-e9533bab124d" path="/var/lib/kubelet/pods/ae6b2577-67ac-4c20-8242-e9533bab124d/volumes" Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.004640 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6bf1264c-d7b4-43ec-89d3-f966deddba7a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwscs\" (UID: \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.005037 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ntfp\" (UniqueName: \"kubernetes.io/projected/6bf1264c-d7b4-43ec-89d3-f966deddba7a-kube-api-access-5ntfp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwscs\" (UID: \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.005159 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bf1264c-d7b4-43ec-89d3-f966deddba7a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwscs\" (UID: \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.107112 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ntfp\" (UniqueName: \"kubernetes.io/projected/6bf1264c-d7b4-43ec-89d3-f966deddba7a-kube-api-access-5ntfp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwscs\" (UID: \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.107286 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bf1264c-d7b4-43ec-89d3-f966deddba7a-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-pwscs\" (UID: \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.107343 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6bf1264c-d7b4-43ec-89d3-f966deddba7a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwscs\" (UID: \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.111991 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6bf1264c-d7b4-43ec-89d3-f966deddba7a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwscs\" (UID: \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.112528 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bf1264c-d7b4-43ec-89d3-f966deddba7a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwscs\" (UID: \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.128297 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ntfp\" (UniqueName: \"kubernetes.io/projected/6bf1264c-d7b4-43ec-89d3-f966deddba7a-kube-api-access-5ntfp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwscs\" (UID: \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.175074 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.714337 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs"] Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.779955 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" event={"ID":"6bf1264c-d7b4-43ec-89d3-f966deddba7a","Type":"ContainerStarted","Data":"8a1909af49b1a169ce5b07eac97a04c736383c9508dcb2174e722e7eb07419af"} Feb 18 20:07:12 crc kubenswrapper[5007]: I0218 20:07:12.910240 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:07:12 crc kubenswrapper[5007]: E0218 20:07:12.910819 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:07:13 crc kubenswrapper[5007]: I0218 20:07:13.798677 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" event={"ID":"6bf1264c-d7b4-43ec-89d3-f966deddba7a","Type":"ContainerStarted","Data":"6cc8703e6faae8dea66589d770fff75b1b759f82caafff4cd5fe0835374f9337"} Feb 18 20:07:13 crc kubenswrapper[5007]: I0218 20:07:13.836643 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" podStartSLOduration=2.4074383 podStartE2EDuration="2.836621452s" podCreationTimestamp="2026-02-18 20:07:11 +0000 UTC" firstStartedPulling="2026-02-18 20:07:12.724011911 +0000 UTC m=+1717.506131435" lastFinishedPulling="2026-02-18 20:07:13.153195063 +0000 UTC m=+1717.935314587" observedRunningTime="2026-02-18 20:07:13.821087112 +0000 UTC m=+1718.603206646" watchObservedRunningTime="2026-02-18 20:07:13.836621452 +0000 UTC m=+1718.618740996" Feb 18 20:07:26 crc kubenswrapper[5007]: I0218 20:07:26.055976 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-w57rb"] Feb 18 20:07:26 crc kubenswrapper[5007]: I0218 20:07:26.066172 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-w57rb"] Feb 18 20:07:26 crc kubenswrapper[5007]: I0218 20:07:26.909204 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:07:26 crc kubenswrapper[5007]: E0218 20:07:26.909464 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:07:27 crc kubenswrapper[5007]: I0218 20:07:27.922391 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c60b1e07-e980-450b-bfb8-c0210edeae7d" path="/var/lib/kubelet/pods/c60b1e07-e980-450b-bfb8-c0210edeae7d/volumes" Feb 18 20:07:37 crc kubenswrapper[5007]: I0218 20:07:37.910879 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:07:37 crc kubenswrapper[5007]: E0218 20:07:37.912220 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:07:49 crc kubenswrapper[5007]: I0218 20:07:49.910050 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:07:49 crc kubenswrapper[5007]: E0218 20:07:49.910957 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:07:53 crc kubenswrapper[5007]: I0218 20:07:53.670105 5007 scope.go:117] "RemoveContainer" containerID="05e36c4228d1cc1c00380382b72699f81f8d6012e14cd2a670ff6e0f3de9e124" Feb 18 20:07:53 crc kubenswrapper[5007]: I0218 20:07:53.722266 5007 scope.go:117] "RemoveContainer" containerID="c6e74b32f049099f48f3116d2e3f2d557d1c920e4030fd13573301d9ed0493d6" Feb 18 20:07:53 crc kubenswrapper[5007]: I0218 20:07:53.775182 5007 scope.go:117] "RemoveContainer" containerID="5d64228614eef24af8844ba7bf031ea7c91d80443ad2383f876552e4df811ee5" Feb 18 20:07:53 crc kubenswrapper[5007]: I0218 20:07:53.844120 5007 scope.go:117] "RemoveContainer" containerID="197db4c397b15798e3dbbb0c31b09c8262bebec449fb2e861f0cf06517226316" Feb 18 20:07:53 crc kubenswrapper[5007]: I0218 20:07:53.897691 5007 scope.go:117] "RemoveContainer" containerID="6ba17ac0e6456e2667da2260eb0157dd5a5553fbb6cb89866ee3115645a49b06" Feb 18 
20:07:53 crc kubenswrapper[5007]: I0218 20:07:53.934172 5007 scope.go:117] "RemoveContainer" containerID="78813c149a9a022e37e970d1ba71725aab4717fc7e06060aa49d9cd5f2554bb3" Feb 18 20:08:02 crc kubenswrapper[5007]: I0218 20:08:02.909583 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:08:02 crc kubenswrapper[5007]: E0218 20:08:02.911250 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:08:14 crc kubenswrapper[5007]: I0218 20:08:14.911268 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:08:14 crc kubenswrapper[5007]: E0218 20:08:14.914202 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:08:23 crc kubenswrapper[5007]: I0218 20:08:23.569143 5007 generic.go:334] "Generic (PLEG): container finished" podID="6bf1264c-d7b4-43ec-89d3-f966deddba7a" containerID="6cc8703e6faae8dea66589d770fff75b1b759f82caafff4cd5fe0835374f9337" exitCode=0 Feb 18 20:08:23 crc kubenswrapper[5007]: I0218 20:08:23.569243 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" 
event={"ID":"6bf1264c-d7b4-43ec-89d3-f966deddba7a","Type":"ContainerDied","Data":"6cc8703e6faae8dea66589d770fff75b1b759f82caafff4cd5fe0835374f9337"} Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.043742 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.058079 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ntfp\" (UniqueName: \"kubernetes.io/projected/6bf1264c-d7b4-43ec-89d3-f966deddba7a-kube-api-access-5ntfp\") pod \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\" (UID: \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\") " Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.058291 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6bf1264c-d7b4-43ec-89d3-f966deddba7a-ssh-key-openstack-edpm-ipam\") pod \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\" (UID: \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\") " Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.058343 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bf1264c-d7b4-43ec-89d3-f966deddba7a-inventory\") pod \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\" (UID: \"6bf1264c-d7b4-43ec-89d3-f966deddba7a\") " Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.069608 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf1264c-d7b4-43ec-89d3-f966deddba7a-kube-api-access-5ntfp" (OuterVolumeSpecName: "kube-api-access-5ntfp") pod "6bf1264c-d7b4-43ec-89d3-f966deddba7a" (UID: "6bf1264c-d7b4-43ec-89d3-f966deddba7a"). InnerVolumeSpecName "kube-api-access-5ntfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.090253 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf1264c-d7b4-43ec-89d3-f966deddba7a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6bf1264c-d7b4-43ec-89d3-f966deddba7a" (UID: "6bf1264c-d7b4-43ec-89d3-f966deddba7a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.109964 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf1264c-d7b4-43ec-89d3-f966deddba7a-inventory" (OuterVolumeSpecName: "inventory") pod "6bf1264c-d7b4-43ec-89d3-f966deddba7a" (UID: "6bf1264c-d7b4-43ec-89d3-f966deddba7a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.169292 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ntfp\" (UniqueName: \"kubernetes.io/projected/6bf1264c-d7b4-43ec-89d3-f966deddba7a-kube-api-access-5ntfp\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.169335 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6bf1264c-d7b4-43ec-89d3-f966deddba7a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.169349 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bf1264c-d7b4-43ec-89d3-f966deddba7a-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.798693 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" 
event={"ID":"6bf1264c-d7b4-43ec-89d3-f966deddba7a","Type":"ContainerDied","Data":"8a1909af49b1a169ce5b07eac97a04c736383c9508dcb2174e722e7eb07419af"} Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.798738 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a1909af49b1a169ce5b07eac97a04c736383c9508dcb2174e722e7eb07419af" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.798806 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwscs" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.812707 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq"] Feb 18 20:08:25 crc kubenswrapper[5007]: E0218 20:08:25.813240 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf1264c-d7b4-43ec-89d3-f966deddba7a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.813261 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf1264c-d7b4-43ec-89d3-f966deddba7a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.813521 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf1264c-d7b4-43ec-89d3-f966deddba7a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.814385 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.816700 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.817351 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.817584 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.817725 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.839199 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq"] Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.881164 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdpkj\" (UniqueName: \"kubernetes.io/projected/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-kube-api-access-wdpkj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq\" (UID: \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.881356 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq\" (UID: \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 
20:08:25.881622 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq\" (UID: \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.919044 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:08:25 crc kubenswrapper[5007]: E0218 20:08:25.919530 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.983418 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq\" (UID: \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.983941 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdpkj\" (UniqueName: \"kubernetes.io/projected/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-kube-api-access-wdpkj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq\" (UID: \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.984244 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq\" (UID: \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.988130 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq\" (UID: \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:25 crc kubenswrapper[5007]: I0218 20:08:25.988402 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq\" (UID: \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:26 crc kubenswrapper[5007]: I0218 20:08:26.004129 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdpkj\" (UniqueName: \"kubernetes.io/projected/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-kube-api-access-wdpkj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq\" (UID: \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:26 crc kubenswrapper[5007]: I0218 20:08:26.148461 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:26 crc kubenswrapper[5007]: I0218 20:08:26.709283 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq"] Feb 18 20:08:26 crc kubenswrapper[5007]: I0218 20:08:26.809851 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" event={"ID":"7c11442a-86a2-42bb-abb6-d1391dd3ab5a","Type":"ContainerStarted","Data":"5261907b31d1fb111a8b81cba1d4f076cacae5f5c360590b8483875047a8bf99"} Feb 18 20:08:27 crc kubenswrapper[5007]: I0218 20:08:27.830040 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" event={"ID":"7c11442a-86a2-42bb-abb6-d1391dd3ab5a","Type":"ContainerStarted","Data":"675d560ff42fc3c3c235dc2e086f9f0a1f0383c4d9681208e96170d122fccadc"} Feb 18 20:08:27 crc kubenswrapper[5007]: I0218 20:08:27.852073 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" podStartSLOduration=2.3887217290000002 podStartE2EDuration="2.852054914s" podCreationTimestamp="2026-02-18 20:08:25 +0000 UTC" firstStartedPulling="2026-02-18 20:08:26.713559809 +0000 UTC m=+1791.495679333" lastFinishedPulling="2026-02-18 20:08:27.176892994 +0000 UTC m=+1791.959012518" observedRunningTime="2026-02-18 20:08:27.851278372 +0000 UTC m=+1792.633397936" watchObservedRunningTime="2026-02-18 20:08:27.852054914 +0000 UTC m=+1792.634174448" Feb 18 20:08:31 crc kubenswrapper[5007]: I0218 20:08:31.877527 5007 generic.go:334] "Generic (PLEG): container finished" podID="7c11442a-86a2-42bb-abb6-d1391dd3ab5a" containerID="675d560ff42fc3c3c235dc2e086f9f0a1f0383c4d9681208e96170d122fccadc" exitCode=0 Feb 18 20:08:31 crc kubenswrapper[5007]: I0218 20:08:31.877620 5007 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" event={"ID":"7c11442a-86a2-42bb-abb6-d1391dd3ab5a","Type":"ContainerDied","Data":"675d560ff42fc3c3c235dc2e086f9f0a1f0383c4d9681208e96170d122fccadc"} Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.055547 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e5eb-account-create-update-bxzlj"] Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.064754 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e5eb-account-create-update-bxzlj"] Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.072745 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1941-account-create-update-wdgcl"] Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.095571 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c253-account-create-update-mg594"] Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.107238 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-68h4k"] Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.120390 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dgmfr"] Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.133570 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-6csdv"] Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.144545 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dgmfr"] Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.175494 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-6csdv"] Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.185500 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-68h4k"] Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.197486 5007 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c253-account-create-update-mg594"] Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.202651 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1941-account-create-update-wdgcl"] Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.456882 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.647462 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-inventory\") pod \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\" (UID: \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\") " Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.647633 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdpkj\" (UniqueName: \"kubernetes.io/projected/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-kube-api-access-wdpkj\") pod \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\" (UID: \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\") " Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.647788 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-ssh-key-openstack-edpm-ipam\") pod \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\" (UID: \"7c11442a-86a2-42bb-abb6-d1391dd3ab5a\") " Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.657632 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-kube-api-access-wdpkj" (OuterVolumeSpecName: "kube-api-access-wdpkj") pod "7c11442a-86a2-42bb-abb6-d1391dd3ab5a" (UID: "7c11442a-86a2-42bb-abb6-d1391dd3ab5a"). 
InnerVolumeSpecName "kube-api-access-wdpkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.678741 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7c11442a-86a2-42bb-abb6-d1391dd3ab5a" (UID: "7c11442a-86a2-42bb-abb6-d1391dd3ab5a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.679819 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-inventory" (OuterVolumeSpecName: "inventory") pod "7c11442a-86a2-42bb-abb6-d1391dd3ab5a" (UID: "7c11442a-86a2-42bb-abb6-d1391dd3ab5a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.750313 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.750364 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.750378 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdpkj\" (UniqueName: \"kubernetes.io/projected/7c11442a-86a2-42bb-abb6-d1391dd3ab5a-kube-api-access-wdpkj\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.900709 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" event={"ID":"7c11442a-86a2-42bb-abb6-d1391dd3ab5a","Type":"ContainerDied","Data":"5261907b31d1fb111a8b81cba1d4f076cacae5f5c360590b8483875047a8bf99"} Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.900775 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5261907b31d1fb111a8b81cba1d4f076cacae5f5c360590b8483875047a8bf99" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.900779 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mqsrq" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.925911 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14da8257-18f4-47ad-bf34-64c287afcd9c" path="/var/lib/kubelet/pods/14da8257-18f4-47ad-bf34-64c287afcd9c/volumes" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.926646 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f8f4417-ce9f-4152-bd07-8f2b3756bd9b" path="/var/lib/kubelet/pods/4f8f4417-ce9f-4152-bd07-8f2b3756bd9b/volumes" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.927323 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3c479f-9b46-4418-b5c3-60b63b6aaa39" path="/var/lib/kubelet/pods/6d3c479f-9b46-4418-b5c3-60b63b6aaa39/volumes" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.928018 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="910c61d8-825c-4f88-923b-69e3e0d20388" path="/var/lib/kubelet/pods/910c61d8-825c-4f88-923b-69e3e0d20388/volumes" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.929300 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98c39b11-db3a-4cff-b4ca-bbb3a533065c" path="/var/lib/kubelet/pods/98c39b11-db3a-4cff-b4ca-bbb3a533065c/volumes" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.930017 5007 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb89ad8-6102-4b15-beea-f81f5cd0640c" path="/var/lib/kubelet/pods/bcb89ad8-6102-4b15-beea-f81f5cd0640c/volumes" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.981197 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz"] Feb 18 20:08:33 crc kubenswrapper[5007]: E0218 20:08:33.981639 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c11442a-86a2-42bb-abb6-d1391dd3ab5a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.981659 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c11442a-86a2-42bb-abb6-d1391dd3ab5a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.981848 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c11442a-86a2-42bb-abb6-d1391dd3ab5a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.983861 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.987368 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.987728 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.987882 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.991569 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:08:33 crc kubenswrapper[5007]: I0218 20:08:33.997380 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz"] Feb 18 20:08:34 crc kubenswrapper[5007]: I0218 20:08:34.057128 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2zkk\" (UniqueName: \"kubernetes.io/projected/7fc64888-fe9b-41f5-acd9-877eb75f9263-kube-api-access-m2zkk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dqwxz\" (UID: \"7fc64888-fe9b-41f5-acd9-877eb75f9263\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:08:34 crc kubenswrapper[5007]: I0218 20:08:34.057285 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fc64888-fe9b-41f5-acd9-877eb75f9263-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dqwxz\" (UID: \"7fc64888-fe9b-41f5-acd9-877eb75f9263\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:08:34 crc kubenswrapper[5007]: I0218 
20:08:34.057331 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc64888-fe9b-41f5-acd9-877eb75f9263-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dqwxz\" (UID: \"7fc64888-fe9b-41f5-acd9-877eb75f9263\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:08:34 crc kubenswrapper[5007]: I0218 20:08:34.158563 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2zkk\" (UniqueName: \"kubernetes.io/projected/7fc64888-fe9b-41f5-acd9-877eb75f9263-kube-api-access-m2zkk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dqwxz\" (UID: \"7fc64888-fe9b-41f5-acd9-877eb75f9263\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:08:34 crc kubenswrapper[5007]: I0218 20:08:34.158692 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fc64888-fe9b-41f5-acd9-877eb75f9263-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dqwxz\" (UID: \"7fc64888-fe9b-41f5-acd9-877eb75f9263\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:08:34 crc kubenswrapper[5007]: I0218 20:08:34.158734 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc64888-fe9b-41f5-acd9-877eb75f9263-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dqwxz\" (UID: \"7fc64888-fe9b-41f5-acd9-877eb75f9263\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:08:34 crc kubenswrapper[5007]: I0218 20:08:34.163686 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc64888-fe9b-41f5-acd9-877eb75f9263-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-dqwxz\" (UID: \"7fc64888-fe9b-41f5-acd9-877eb75f9263\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:08:34 crc kubenswrapper[5007]: I0218 20:08:34.167830 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fc64888-fe9b-41f5-acd9-877eb75f9263-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dqwxz\" (UID: \"7fc64888-fe9b-41f5-acd9-877eb75f9263\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:08:34 crc kubenswrapper[5007]: I0218 20:08:34.188261 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2zkk\" (UniqueName: \"kubernetes.io/projected/7fc64888-fe9b-41f5-acd9-877eb75f9263-kube-api-access-m2zkk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dqwxz\" (UID: \"7fc64888-fe9b-41f5-acd9-877eb75f9263\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:08:34 crc kubenswrapper[5007]: I0218 20:08:34.333900 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:08:34 crc kubenswrapper[5007]: I0218 20:08:34.918773 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz"] Feb 18 20:08:35 crc kubenswrapper[5007]: I0218 20:08:35.927606 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" event={"ID":"7fc64888-fe9b-41f5-acd9-877eb75f9263","Type":"ContainerStarted","Data":"302c60b9a4b55ee1c54bbd3ee1d1b38aa1b6b9f836cc5abd69a70a5dbdfb2ac3"} Feb 18 20:08:35 crc kubenswrapper[5007]: I0218 20:08:35.927932 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" event={"ID":"7fc64888-fe9b-41f5-acd9-877eb75f9263","Type":"ContainerStarted","Data":"06a54e65f59c169213d6eec3949f07100349aac09aa5449916710cc5a7ebec88"} Feb 18 20:08:35 crc kubenswrapper[5007]: I0218 20:08:35.974041 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" podStartSLOduration=2.5692293299999998 podStartE2EDuration="2.974020487s" podCreationTimestamp="2026-02-18 20:08:33 +0000 UTC" firstStartedPulling="2026-02-18 20:08:34.912488732 +0000 UTC m=+1799.694608266" lastFinishedPulling="2026-02-18 20:08:35.317279859 +0000 UTC m=+1800.099399423" observedRunningTime="2026-02-18 20:08:35.966050151 +0000 UTC m=+1800.748169695" watchObservedRunningTime="2026-02-18 20:08:35.974020487 +0000 UTC m=+1800.756140021" Feb 18 20:08:36 crc kubenswrapper[5007]: I0218 20:08:36.909373 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:08:36 crc kubenswrapper[5007]: E0218 20:08:36.910008 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:08:47 crc kubenswrapper[5007]: I0218 20:08:47.910351 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:08:47 crc kubenswrapper[5007]: E0218 20:08:47.911287 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:08:54 crc kubenswrapper[5007]: I0218 20:08:54.088359 5007 scope.go:117] "RemoveContainer" containerID="291cb0d0d63a264382a202d81b86b0652c420324ead30a0155efabd4dd879465" Feb 18 20:08:54 crc kubenswrapper[5007]: I0218 20:08:54.125367 5007 scope.go:117] "RemoveContainer" containerID="d686fb592e2993bd0ffaae443dec0b7bdfb0388125fdbf6920649836d3d7124d" Feb 18 20:08:54 crc kubenswrapper[5007]: I0218 20:08:54.176305 5007 scope.go:117] "RemoveContainer" containerID="9f60f38d80d84af9b61579d55fbd7f63d1eaa74e05167c6ac3e165fcc9f0afcb" Feb 18 20:08:54 crc kubenswrapper[5007]: I0218 20:08:54.228500 5007 scope.go:117] "RemoveContainer" containerID="83be7aa7dae8537061545300a691e3029a81cf3cdad2a6e498a64d03a02e6a1f" Feb 18 20:08:54 crc kubenswrapper[5007]: I0218 20:08:54.267802 5007 scope.go:117] "RemoveContainer" containerID="351e16fb070cdbaf4f74ed7bb989c0ca1a4950807987f33641c54d4a2484e666" Feb 18 20:08:54 crc kubenswrapper[5007]: I0218 20:08:54.313041 5007 scope.go:117] "RemoveContainer" 
containerID="1b63fb3cd464e1937b6ac676cbfe0d39717ad20583f308875ee044c9adfcdfc7" Feb 18 20:08:59 crc kubenswrapper[5007]: I0218 20:08:59.910307 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:08:59 crc kubenswrapper[5007]: E0218 20:08:59.911747 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:09:01 crc kubenswrapper[5007]: I0218 20:09:01.070504 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7kccx"] Feb 18 20:09:01 crc kubenswrapper[5007]: I0218 20:09:01.091815 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7kccx"] Feb 18 20:09:01 crc kubenswrapper[5007]: I0218 20:09:01.922777 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6839f27f-6779-4e91-80f8-669c7f4583e2" path="/var/lib/kubelet/pods/6839f27f-6779-4e91-80f8-669c7f4583e2/volumes" Feb 18 20:09:12 crc kubenswrapper[5007]: I0218 20:09:12.909492 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:09:12 crc kubenswrapper[5007]: E0218 20:09:12.910472 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" 
podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:09:14 crc kubenswrapper[5007]: I0218 20:09:14.355736 5007 generic.go:334] "Generic (PLEG): container finished" podID="7fc64888-fe9b-41f5-acd9-877eb75f9263" containerID="302c60b9a4b55ee1c54bbd3ee1d1b38aa1b6b9f836cc5abd69a70a5dbdfb2ac3" exitCode=0 Feb 18 20:09:14 crc kubenswrapper[5007]: I0218 20:09:14.356173 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" event={"ID":"7fc64888-fe9b-41f5-acd9-877eb75f9263","Type":"ContainerDied","Data":"302c60b9a4b55ee1c54bbd3ee1d1b38aa1b6b9f836cc5abd69a70a5dbdfb2ac3"} Feb 18 20:09:15 crc kubenswrapper[5007]: I0218 20:09:15.868054 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.042299 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc64888-fe9b-41f5-acd9-877eb75f9263-inventory\") pod \"7fc64888-fe9b-41f5-acd9-877eb75f9263\" (UID: \"7fc64888-fe9b-41f5-acd9-877eb75f9263\") " Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.042930 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fc64888-fe9b-41f5-acd9-877eb75f9263-ssh-key-openstack-edpm-ipam\") pod \"7fc64888-fe9b-41f5-acd9-877eb75f9263\" (UID: \"7fc64888-fe9b-41f5-acd9-877eb75f9263\") " Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.043013 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2zkk\" (UniqueName: \"kubernetes.io/projected/7fc64888-fe9b-41f5-acd9-877eb75f9263-kube-api-access-m2zkk\") pod \"7fc64888-fe9b-41f5-acd9-877eb75f9263\" (UID: \"7fc64888-fe9b-41f5-acd9-877eb75f9263\") " Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 
20:09:16.063659 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc64888-fe9b-41f5-acd9-877eb75f9263-kube-api-access-m2zkk" (OuterVolumeSpecName: "kube-api-access-m2zkk") pod "7fc64888-fe9b-41f5-acd9-877eb75f9263" (UID: "7fc64888-fe9b-41f5-acd9-877eb75f9263"). InnerVolumeSpecName "kube-api-access-m2zkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.089651 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc64888-fe9b-41f5-acd9-877eb75f9263-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7fc64888-fe9b-41f5-acd9-877eb75f9263" (UID: "7fc64888-fe9b-41f5-acd9-877eb75f9263"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.146780 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2zkk\" (UniqueName: \"kubernetes.io/projected/7fc64888-fe9b-41f5-acd9-877eb75f9263-kube-api-access-m2zkk\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.146813 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fc64888-fe9b-41f5-acd9-877eb75f9263-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.153541 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc64888-fe9b-41f5-acd9-877eb75f9263-inventory" (OuterVolumeSpecName: "inventory") pod "7fc64888-fe9b-41f5-acd9-877eb75f9263" (UID: "7fc64888-fe9b-41f5-acd9-877eb75f9263"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.248290 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc64888-fe9b-41f5-acd9-877eb75f9263-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.379004 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" event={"ID":"7fc64888-fe9b-41f5-acd9-877eb75f9263","Type":"ContainerDied","Data":"06a54e65f59c169213d6eec3949f07100349aac09aa5449916710cc5a7ebec88"} Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.379041 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06a54e65f59c169213d6eec3949f07100349aac09aa5449916710cc5a7ebec88" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.379115 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dqwxz" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.498363 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf"] Feb 18 20:09:16 crc kubenswrapper[5007]: E0218 20:09:16.498795 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc64888-fe9b-41f5-acd9-877eb75f9263" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.498815 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc64888-fe9b-41f5-acd9-877eb75f9263" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.499038 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc64888-fe9b-41f5-acd9-877eb75f9263" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:09:16 crc kubenswrapper[5007]: 
I0218 20:09:16.499764 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.503333 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.503715 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.504176 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.504306 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.513219 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf"] Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.655881 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf9rj\" (UniqueName: \"kubernetes.io/projected/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-kube-api-access-bf9rj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf\" (UID: \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.655926 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf\" (UID: \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" 
Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.655994 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf\" (UID: \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.757613 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf\" (UID: \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.758000 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf9rj\" (UniqueName: \"kubernetes.io/projected/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-kube-api-access-bf9rj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf\" (UID: \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.758024 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf\" (UID: \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.761359 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf\" (UID: \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.763320 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf\" (UID: \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.784232 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf9rj\" (UniqueName: \"kubernetes.io/projected/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-kube-api-access-bf9rj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf\" (UID: \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" Feb 18 20:09:16 crc kubenswrapper[5007]: I0218 20:09:16.822370 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" Feb 18 20:09:17 crc kubenswrapper[5007]: I0218 20:09:17.440002 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf"] Feb 18 20:09:18 crc kubenswrapper[5007]: I0218 20:09:18.398616 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" event={"ID":"8967b9d7-7c02-4caa-be79-5fd35f2bdb59","Type":"ContainerStarted","Data":"517242fec801aeb7aa719e70137430eb264687273af6e80d795478a0882fadba"} Feb 18 20:09:19 crc kubenswrapper[5007]: I0218 20:09:19.409896 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" event={"ID":"8967b9d7-7c02-4caa-be79-5fd35f2bdb59","Type":"ContainerStarted","Data":"700d01288e8b592d049e9dc4a57d3f74348aa6671ad94b1f41ac246c98335037"} Feb 18 20:09:19 crc kubenswrapper[5007]: I0218 20:09:19.444297 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" podStartSLOduration=2.759402977 podStartE2EDuration="3.444257962s" podCreationTimestamp="2026-02-18 20:09:16 +0000 UTC" firstStartedPulling="2026-02-18 20:09:17.43386202 +0000 UTC m=+1842.215981544" lastFinishedPulling="2026-02-18 20:09:18.118716975 +0000 UTC m=+1842.900836529" observedRunningTime="2026-02-18 20:09:19.433853838 +0000 UTC m=+1844.215973442" watchObservedRunningTime="2026-02-18 20:09:19.444257962 +0000 UTC m=+1844.226377526" Feb 18 20:09:25 crc kubenswrapper[5007]: I0218 20:09:25.061210 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4zz4g"] Feb 18 20:09:25 crc kubenswrapper[5007]: I0218 20:09:25.072386 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4zz4g"] Feb 18 20:09:25 crc kubenswrapper[5007]: I0218 
20:09:25.928948 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c44e9b1-0ed5-466d-9602-e53210c2c34a" path="/var/lib/kubelet/pods/1c44e9b1-0ed5-466d-9602-e53210c2c34a/volumes" Feb 18 20:09:27 crc kubenswrapper[5007]: I0218 20:09:27.909583 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:09:27 crc kubenswrapper[5007]: E0218 20:09:27.910105 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:09:29 crc kubenswrapper[5007]: I0218 20:09:29.030286 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbb2l"] Feb 18 20:09:29 crc kubenswrapper[5007]: I0218 20:09:29.042564 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xbb2l"] Feb 18 20:09:29 crc kubenswrapper[5007]: I0218 20:09:29.922943 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b07421c1-d538-4d07-92d5-19e6ffcd0b3e" path="/var/lib/kubelet/pods/b07421c1-d538-4d07-92d5-19e6ffcd0b3e/volumes" Feb 18 20:09:41 crc kubenswrapper[5007]: I0218 20:09:41.911220 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:09:41 crc kubenswrapper[5007]: E0218 20:09:41.912128 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:09:54 crc kubenswrapper[5007]: I0218 20:09:54.471485 5007 scope.go:117] "RemoveContainer" containerID="f3967271c77a886891ecd8f64ea386036c748a9e34033181547499f3b5ae415c" Feb 18 20:09:54 crc kubenswrapper[5007]: I0218 20:09:54.533153 5007 scope.go:117] "RemoveContainer" containerID="b851d4cee6f5b4b62cd371a257eeb53ce2753849df0019589fcfd402020c572b" Feb 18 20:09:54 crc kubenswrapper[5007]: I0218 20:09:54.603944 5007 scope.go:117] "RemoveContainer" containerID="93db4e202598458bd66cb8df69cb4c3d2b2eff78a59f7972ecfd302a3dc92921" Feb 18 20:09:54 crc kubenswrapper[5007]: I0218 20:09:54.909368 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:09:54 crc kubenswrapper[5007]: E0218 20:09:54.909828 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:10:06 crc kubenswrapper[5007]: I0218 20:10:06.910592 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:10:06 crc kubenswrapper[5007]: E0218 20:10:06.911836 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:10:07 crc kubenswrapper[5007]: I0218 20:10:07.918039 5007 generic.go:334] "Generic (PLEG): container finished" podID="8967b9d7-7c02-4caa-be79-5fd35f2bdb59" containerID="700d01288e8b592d049e9dc4a57d3f74348aa6671ad94b1f41ac246c98335037" exitCode=0 Feb 18 20:10:07 crc kubenswrapper[5007]: I0218 20:10:07.927912 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" event={"ID":"8967b9d7-7c02-4caa-be79-5fd35f2bdb59","Type":"ContainerDied","Data":"700d01288e8b592d049e9dc4a57d3f74348aa6671ad94b1f41ac246c98335037"} Feb 18 20:10:08 crc kubenswrapper[5007]: I0218 20:10:08.080489 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-d7zq4"] Feb 18 20:10:08 crc kubenswrapper[5007]: I0218 20:10:08.093202 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-d7zq4"] Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.414285 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.520117 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-inventory\") pod \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\" (UID: \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\") " Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.520247 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf9rj\" (UniqueName: \"kubernetes.io/projected/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-kube-api-access-bf9rj\") pod \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\" (UID: \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\") " Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.520559 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-ssh-key-openstack-edpm-ipam\") pod \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\" (UID: \"8967b9d7-7c02-4caa-be79-5fd35f2bdb59\") " Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.529046 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-kube-api-access-bf9rj" (OuterVolumeSpecName: "kube-api-access-bf9rj") pod "8967b9d7-7c02-4caa-be79-5fd35f2bdb59" (UID: "8967b9d7-7c02-4caa-be79-5fd35f2bdb59"). InnerVolumeSpecName "kube-api-access-bf9rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.556642 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-inventory" (OuterVolumeSpecName: "inventory") pod "8967b9d7-7c02-4caa-be79-5fd35f2bdb59" (UID: "8967b9d7-7c02-4caa-be79-5fd35f2bdb59"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.558383 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8967b9d7-7c02-4caa-be79-5fd35f2bdb59" (UID: "8967b9d7-7c02-4caa-be79-5fd35f2bdb59"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.623489 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.623532 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.623544 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf9rj\" (UniqueName: \"kubernetes.io/projected/8967b9d7-7c02-4caa-be79-5fd35f2bdb59-kube-api-access-bf9rj\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.929685 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e27342a-023c-465c-b6d6-32dadd9343da" path="/var/lib/kubelet/pods/9e27342a-023c-465c-b6d6-32dadd9343da/volumes" Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.941511 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" event={"ID":"8967b9d7-7c02-4caa-be79-5fd35f2bdb59","Type":"ContainerDied","Data":"517242fec801aeb7aa719e70137430eb264687273af6e80d795478a0882fadba"} Feb 18 20:10:09 crc 
kubenswrapper[5007]: I0218 20:10:09.941572 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="517242fec801aeb7aa719e70137430eb264687273af6e80d795478a0882fadba" Feb 18 20:10:09 crc kubenswrapper[5007]: I0218 20:10:09.941577 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-h9lpf" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.086790 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vhbhw"] Feb 18 20:10:10 crc kubenswrapper[5007]: E0218 20:10:10.087236 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8967b9d7-7c02-4caa-be79-5fd35f2bdb59" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.087260 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="8967b9d7-7c02-4caa-be79-5fd35f2bdb59" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.087569 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="8967b9d7-7c02-4caa-be79-5fd35f2bdb59" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.088594 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.091808 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.092050 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.097955 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.098104 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.110821 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vhbhw"] Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.235919 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vhbhw\" (UID: \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.236068 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c28h\" (UniqueName: \"kubernetes.io/projected/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-kube-api-access-8c28h\") pod \"ssh-known-hosts-edpm-deployment-vhbhw\" (UID: \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.236103 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vhbhw\" (UID: \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.338354 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vhbhw\" (UID: \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.338669 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c28h\" (UniqueName: \"kubernetes.io/projected/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-kube-api-access-8c28h\") pod \"ssh-known-hosts-edpm-deployment-vhbhw\" (UID: \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.338748 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vhbhw\" (UID: \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.344899 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vhbhw\" (UID: \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.345136 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vhbhw\" (UID: \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.367014 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c28h\" (UniqueName: \"kubernetes.io/projected/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-kube-api-access-8c28h\") pod \"ssh-known-hosts-edpm-deployment-vhbhw\" (UID: \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" Feb 18 20:10:10 crc kubenswrapper[5007]: I0218 20:10:10.440943 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" Feb 18 20:10:11 crc kubenswrapper[5007]: I0218 20:10:11.056409 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vhbhw"] Feb 18 20:10:11 crc kubenswrapper[5007]: I0218 20:10:11.059539 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:10:11 crc kubenswrapper[5007]: I0218 20:10:11.968780 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" event={"ID":"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88","Type":"ContainerStarted","Data":"4113caa1de64c47a6a91c97c6f9e3e6eae9170af658475571c5ae864c07506f3"} Feb 18 20:10:11 crc kubenswrapper[5007]: I0218 20:10:11.969103 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" event={"ID":"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88","Type":"ContainerStarted","Data":"7d4c90515d659b67c1f2e098810f762f3bd68c3d837cc0d85cfb937495b6cb89"} Feb 18 20:10:12 crc kubenswrapper[5007]: I0218 
20:10:12.003909 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" podStartSLOduration=1.436926506 podStartE2EDuration="2.003883192s" podCreationTimestamp="2026-02-18 20:10:10 +0000 UTC" firstStartedPulling="2026-02-18 20:10:11.059172133 +0000 UTC m=+1895.841291667" lastFinishedPulling="2026-02-18 20:10:11.626128799 +0000 UTC m=+1896.408248353" observedRunningTime="2026-02-18 20:10:11.995605937 +0000 UTC m=+1896.777725501" watchObservedRunningTime="2026-02-18 20:10:12.003883192 +0000 UTC m=+1896.786002726" Feb 18 20:10:18 crc kubenswrapper[5007]: I0218 20:10:18.910893 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:10:19 crc kubenswrapper[5007]: I0218 20:10:19.053865 5007 generic.go:334] "Generic (PLEG): container finished" podID="e3a9d14b-717d-4cb0-9833-f4b2dcf30d88" containerID="4113caa1de64c47a6a91c97c6f9e3e6eae9170af658475571c5ae864c07506f3" exitCode=0 Feb 18 20:10:19 crc kubenswrapper[5007]: I0218 20:10:19.053969 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" event={"ID":"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88","Type":"ContainerDied","Data":"4113caa1de64c47a6a91c97c6f9e3e6eae9170af658475571c5ae864c07506f3"} Feb 18 20:10:20 crc kubenswrapper[5007]: I0218 20:10:20.072056 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"881a8af4d9e904c55cf00c4e5a1e00bdac8f5296c2773898c042023717199565"} Feb 18 20:10:20 crc kubenswrapper[5007]: I0218 20:10:20.507334 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" Feb 18 20:10:20 crc kubenswrapper[5007]: I0218 20:10:20.692182 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-ssh-key-openstack-edpm-ipam\") pod \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\" (UID: \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\") " Feb 18 20:10:20 crc kubenswrapper[5007]: I0218 20:10:20.692700 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c28h\" (UniqueName: \"kubernetes.io/projected/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-kube-api-access-8c28h\") pod \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\" (UID: \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\") " Feb 18 20:10:20 crc kubenswrapper[5007]: I0218 20:10:20.692765 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-inventory-0\") pod \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\" (UID: \"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88\") " Feb 18 20:10:20 crc kubenswrapper[5007]: I0218 20:10:20.698281 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-kube-api-access-8c28h" (OuterVolumeSpecName: "kube-api-access-8c28h") pod "e3a9d14b-717d-4cb0-9833-f4b2dcf30d88" (UID: "e3a9d14b-717d-4cb0-9833-f4b2dcf30d88"). InnerVolumeSpecName "kube-api-access-8c28h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:10:20 crc kubenswrapper[5007]: I0218 20:10:20.718593 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e3a9d14b-717d-4cb0-9833-f4b2dcf30d88" (UID: "e3a9d14b-717d-4cb0-9833-f4b2dcf30d88"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:20 crc kubenswrapper[5007]: I0218 20:10:20.743100 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e3a9d14b-717d-4cb0-9833-f4b2dcf30d88" (UID: "e3a9d14b-717d-4cb0-9833-f4b2dcf30d88"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:20 crc kubenswrapper[5007]: I0218 20:10:20.796227 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:20 crc kubenswrapper[5007]: I0218 20:10:20.796277 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c28h\" (UniqueName: \"kubernetes.io/projected/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-kube-api-access-8c28h\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:20 crc kubenswrapper[5007]: I0218 20:10:20.796291 5007 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3a9d14b-717d-4cb0-9833-f4b2dcf30d88-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.087227 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw" 
event={"ID":"e3a9d14b-717d-4cb0-9833-f4b2dcf30d88","Type":"ContainerDied","Data":"7d4c90515d659b67c1f2e098810f762f3bd68c3d837cc0d85cfb937495b6cb89"}
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.087272 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d4c90515d659b67c1f2e098810f762f3bd68c3d837cc0d85cfb937495b6cb89"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.087346 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vhbhw"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.209969 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"]
Feb 18 20:10:21 crc kubenswrapper[5007]: E0218 20:10:21.210581 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a9d14b-717d-4cb0-9833-f4b2dcf30d88" containerName="ssh-known-hosts-edpm-deployment"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.210653 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a9d14b-717d-4cb0-9833-f4b2dcf30d88" containerName="ssh-known-hosts-edpm-deployment"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.210917 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a9d14b-717d-4cb0-9833-f4b2dcf30d88" containerName="ssh-known-hosts-edpm-deployment"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.211731 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.216099 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.216320 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.216493 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.216508 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.217858 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"]
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.311219 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4573bda9-995a-43d9-8a36-14670d4f6425-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tk86\" (UID: \"4573bda9-995a-43d9-8a36-14670d4f6425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.311315 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jl99\" (UniqueName: \"kubernetes.io/projected/4573bda9-995a-43d9-8a36-14670d4f6425-kube-api-access-9jl99\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tk86\" (UID: \"4573bda9-995a-43d9-8a36-14670d4f6425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.311401 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4573bda9-995a-43d9-8a36-14670d4f6425-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tk86\" (UID: \"4573bda9-995a-43d9-8a36-14670d4f6425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.414287 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4573bda9-995a-43d9-8a36-14670d4f6425-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tk86\" (UID: \"4573bda9-995a-43d9-8a36-14670d4f6425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.414420 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jl99\" (UniqueName: \"kubernetes.io/projected/4573bda9-995a-43d9-8a36-14670d4f6425-kube-api-access-9jl99\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tk86\" (UID: \"4573bda9-995a-43d9-8a36-14670d4f6425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.414647 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4573bda9-995a-43d9-8a36-14670d4f6425-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tk86\" (UID: \"4573bda9-995a-43d9-8a36-14670d4f6425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.418618 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4573bda9-995a-43d9-8a36-14670d4f6425-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tk86\" (UID: \"4573bda9-995a-43d9-8a36-14670d4f6425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.418791 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4573bda9-995a-43d9-8a36-14670d4f6425-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tk86\" (UID: \"4573bda9-995a-43d9-8a36-14670d4f6425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.430043 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jl99\" (UniqueName: \"kubernetes.io/projected/4573bda9-995a-43d9-8a36-14670d4f6425-kube-api-access-9jl99\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4tk86\" (UID: \"4573bda9-995a-43d9-8a36-14670d4f6425\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:21 crc kubenswrapper[5007]: I0218 20:10:21.530600 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:22 crc kubenswrapper[5007]: I0218 20:10:22.153897 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"]
Feb 18 20:10:22 crc kubenswrapper[5007]: W0218 20:10:22.157715 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4573bda9_995a_43d9_8a36_14670d4f6425.slice/crio-330cb3cc5c2a51492a4235f9b474079c92209700e8063a7c5c0b9ced5c34c607 WatchSource:0}: Error finding container 330cb3cc5c2a51492a4235f9b474079c92209700e8063a7c5c0b9ced5c34c607: Status 404 returned error can't find the container with id 330cb3cc5c2a51492a4235f9b474079c92209700e8063a7c5c0b9ced5c34c607
Feb 18 20:10:23 crc kubenswrapper[5007]: I0218 20:10:23.119009 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86" event={"ID":"4573bda9-995a-43d9-8a36-14670d4f6425","Type":"ContainerStarted","Data":"9152eed645adf9bcc8a437088e9d4be0537476884af117baf0fba8877c345b9b"}
Feb 18 20:10:23 crc kubenswrapper[5007]: I0218 20:10:23.119363 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86" event={"ID":"4573bda9-995a-43d9-8a36-14670d4f6425","Type":"ContainerStarted","Data":"330cb3cc5c2a51492a4235f9b474079c92209700e8063a7c5c0b9ced5c34c607"}
Feb 18 20:10:23 crc kubenswrapper[5007]: I0218 20:10:23.142383 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86" podStartSLOduration=1.706185699 podStartE2EDuration="2.142365545s" podCreationTimestamp="2026-02-18 20:10:21 +0000 UTC" firstStartedPulling="2026-02-18 20:10:22.163415177 +0000 UTC m=+1906.945534711" lastFinishedPulling="2026-02-18 20:10:22.599595003 +0000 UTC m=+1907.381714557" observedRunningTime="2026-02-18 20:10:23.135800899 +0000 UTC m=+1907.917920463" watchObservedRunningTime="2026-02-18 20:10:23.142365545 +0000 UTC m=+1907.924485069"
Feb 18 20:10:32 crc kubenswrapper[5007]: I0218 20:10:32.229687 5007 generic.go:334] "Generic (PLEG): container finished" podID="4573bda9-995a-43d9-8a36-14670d4f6425" containerID="9152eed645adf9bcc8a437088e9d4be0537476884af117baf0fba8877c345b9b" exitCode=0
Feb 18 20:10:32 crc kubenswrapper[5007]: I0218 20:10:32.229751 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86" event={"ID":"4573bda9-995a-43d9-8a36-14670d4f6425","Type":"ContainerDied","Data":"9152eed645adf9bcc8a437088e9d4be0537476884af117baf0fba8877c345b9b"}
Feb 18 20:10:33 crc kubenswrapper[5007]: I0218 20:10:33.752231 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:33 crc kubenswrapper[5007]: I0218 20:10:33.899789 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4573bda9-995a-43d9-8a36-14670d4f6425-ssh-key-openstack-edpm-ipam\") pod \"4573bda9-995a-43d9-8a36-14670d4f6425\" (UID: \"4573bda9-995a-43d9-8a36-14670d4f6425\") "
Feb 18 20:10:33 crc kubenswrapper[5007]: I0218 20:10:33.900554 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4573bda9-995a-43d9-8a36-14670d4f6425-inventory\") pod \"4573bda9-995a-43d9-8a36-14670d4f6425\" (UID: \"4573bda9-995a-43d9-8a36-14670d4f6425\") "
Feb 18 20:10:33 crc kubenswrapper[5007]: I0218 20:10:33.900801 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jl99\" (UniqueName: \"kubernetes.io/projected/4573bda9-995a-43d9-8a36-14670d4f6425-kube-api-access-9jl99\") pod \"4573bda9-995a-43d9-8a36-14670d4f6425\" (UID: \"4573bda9-995a-43d9-8a36-14670d4f6425\") "
Feb 18 20:10:33 crc kubenswrapper[5007]: I0218 20:10:33.908795 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4573bda9-995a-43d9-8a36-14670d4f6425-kube-api-access-9jl99" (OuterVolumeSpecName: "kube-api-access-9jl99") pod "4573bda9-995a-43d9-8a36-14670d4f6425" (UID: "4573bda9-995a-43d9-8a36-14670d4f6425"). InnerVolumeSpecName "kube-api-access-9jl99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:10:33 crc kubenswrapper[5007]: I0218 20:10:33.932204 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4573bda9-995a-43d9-8a36-14670d4f6425-inventory" (OuterVolumeSpecName: "inventory") pod "4573bda9-995a-43d9-8a36-14670d4f6425" (UID: "4573bda9-995a-43d9-8a36-14670d4f6425"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:10:33 crc kubenswrapper[5007]: I0218 20:10:33.954118 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4573bda9-995a-43d9-8a36-14670d4f6425-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4573bda9-995a-43d9-8a36-14670d4f6425" (UID: "4573bda9-995a-43d9-8a36-14670d4f6425"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.003623 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4573bda9-995a-43d9-8a36-14670d4f6425-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.003690 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4573bda9-995a-43d9-8a36-14670d4f6425-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.003712 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jl99\" (UniqueName: \"kubernetes.io/projected/4573bda9-995a-43d9-8a36-14670d4f6425-kube-api-access-9jl99\") on node \"crc\" DevicePath \"\""
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.257961 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86" event={"ID":"4573bda9-995a-43d9-8a36-14670d4f6425","Type":"ContainerDied","Data":"330cb3cc5c2a51492a4235f9b474079c92209700e8063a7c5c0b9ced5c34c607"}
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.258008 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="330cb3cc5c2a51492a4235f9b474079c92209700e8063a7c5c0b9ced5c34c607"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.258149 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4tk86"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.394748 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"]
Feb 18 20:10:34 crc kubenswrapper[5007]: E0218 20:10:34.395237 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4573bda9-995a-43d9-8a36-14670d4f6425" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.395255 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="4573bda9-995a-43d9-8a36-14670d4f6425" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.395525 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="4573bda9-995a-43d9-8a36-14670d4f6425" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.396170 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.399711 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.400093 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.400527 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.400814 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.410091 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"]
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.518506 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fndzn\" (UniqueName: \"kubernetes.io/projected/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-kube-api-access-fndzn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj\" (UID: \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.518639 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj\" (UID: \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.518697 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj\" (UID: \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.620495 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj\" (UID: \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.620618 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj\" (UID: \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.620729 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fndzn\" (UniqueName: \"kubernetes.io/projected/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-kube-api-access-fndzn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj\" (UID: \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.626047 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj\" (UID: \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.630004 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj\" (UID: \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.647236 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fndzn\" (UniqueName: \"kubernetes.io/projected/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-kube-api-access-fndzn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj\" (UID: \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:34 crc kubenswrapper[5007]: I0218 20:10:34.731285 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:35 crc kubenswrapper[5007]: I0218 20:10:35.342614 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"]
Feb 18 20:10:36 crc kubenswrapper[5007]: I0218 20:10:36.278791 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj" event={"ID":"dda9e30c-fe74-463d-9a2c-749a8b47ef7b","Type":"ContainerStarted","Data":"bb55aa0c3720d9628aa59ee83d9a387503b581a8a5c5d4832fd1bc961f51a2f7"}
Feb 18 20:10:36 crc kubenswrapper[5007]: I0218 20:10:36.279283 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj" event={"ID":"dda9e30c-fe74-463d-9a2c-749a8b47ef7b","Type":"ContainerStarted","Data":"ba10bb2367242a44cd6e7522388bffb186a4e53263131556cdfab4829fbdcbe6"}
Feb 18 20:10:36 crc kubenswrapper[5007]: I0218 20:10:36.304284 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj" podStartSLOduration=1.8871953019999999 podStartE2EDuration="2.304257977s" podCreationTimestamp="2026-02-18 20:10:34 +0000 UTC" firstStartedPulling="2026-02-18 20:10:35.344210743 +0000 UTC m=+1920.126330307" lastFinishedPulling="2026-02-18 20:10:35.761273418 +0000 UTC m=+1920.543392982" observedRunningTime="2026-02-18 20:10:36.299181073 +0000 UTC m=+1921.081300627" watchObservedRunningTime="2026-02-18 20:10:36.304257977 +0000 UTC m=+1921.086377531"
Feb 18 20:10:46 crc kubenswrapper[5007]: I0218 20:10:46.399323 5007 generic.go:334] "Generic (PLEG): container finished" podID="dda9e30c-fe74-463d-9a2c-749a8b47ef7b" containerID="bb55aa0c3720d9628aa59ee83d9a387503b581a8a5c5d4832fd1bc961f51a2f7" exitCode=0
Feb 18 20:10:46 crc kubenswrapper[5007]: I0218 20:10:46.399469 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj" event={"ID":"dda9e30c-fe74-463d-9a2c-749a8b47ef7b","Type":"ContainerDied","Data":"bb55aa0c3720d9628aa59ee83d9a387503b581a8a5c5d4832fd1bc961f51a2f7"}
Feb 18 20:10:47 crc kubenswrapper[5007]: I0218 20:10:47.904165 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.026985 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fndzn\" (UniqueName: \"kubernetes.io/projected/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-kube-api-access-fndzn\") pod \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\" (UID: \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\") "
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.027508 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-inventory\") pod \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\" (UID: \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\") "
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.027610 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-ssh-key-openstack-edpm-ipam\") pod \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\" (UID: \"dda9e30c-fe74-463d-9a2c-749a8b47ef7b\") "
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.035348 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-kube-api-access-fndzn" (OuterVolumeSpecName: "kube-api-access-fndzn") pod "dda9e30c-fe74-463d-9a2c-749a8b47ef7b" (UID: "dda9e30c-fe74-463d-9a2c-749a8b47ef7b"). InnerVolumeSpecName "kube-api-access-fndzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.086244 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dda9e30c-fe74-463d-9a2c-749a8b47ef7b" (UID: "dda9e30c-fe74-463d-9a2c-749a8b47ef7b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.087218 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-inventory" (OuterVolumeSpecName: "inventory") pod "dda9e30c-fe74-463d-9a2c-749a8b47ef7b" (UID: "dda9e30c-fe74-463d-9a2c-749a8b47ef7b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.133049 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fndzn\" (UniqueName: \"kubernetes.io/projected/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-kube-api-access-fndzn\") on node \"crc\" DevicePath \"\""
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.133261 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.133369 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dda9e30c-fe74-463d-9a2c-749a8b47ef7b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.423335 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj" event={"ID":"dda9e30c-fe74-463d-9a2c-749a8b47ef7b","Type":"ContainerDied","Data":"ba10bb2367242a44cd6e7522388bffb186a4e53263131556cdfab4829fbdcbe6"}
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.423457 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba10bb2367242a44cd6e7522388bffb186a4e53263131556cdfab4829fbdcbe6"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.423379 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-9csrj"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.596748 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"]
Feb 18 20:10:48 crc kubenswrapper[5007]: E0218 20:10:48.597562 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda9e30c-fe74-463d-9a2c-749a8b47ef7b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.597711 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda9e30c-fe74-463d-9a2c-749a8b47ef7b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.598101 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda9e30c-fe74-463d-9a2c-749a8b47ef7b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.599148 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.604651 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.605856 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.605935 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.605860 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.606551 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.606549 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"]
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.607115 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.611990 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.612031 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.656000 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.656155 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.656300 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.656363 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.656508 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.656565 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b78l8\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-kube-api-access-b78l8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.656604 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.656898 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.657024 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.657288 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.657376 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.657474 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.657589 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.657739 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759207 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759323 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759392 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759458 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759540 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759585 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b78l8\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-kube-api-access-b78l8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759626 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759683 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759727 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759808 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"
Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759846 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") "
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759891 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.759946 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.760021 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.767582 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc 
kubenswrapper[5007]: I0218 20:10:48.767778 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.768022 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.768173 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.769135 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.769722 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.770330 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.770534 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.770820 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.771525 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.771774 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.774590 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.780596 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.798255 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b78l8\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-kube-api-access-b78l8\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-brqcb\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:48 crc kubenswrapper[5007]: I0218 20:10:48.929947 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:10:49 crc kubenswrapper[5007]: I0218 20:10:49.357394 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb"] Feb 18 20:10:49 crc kubenswrapper[5007]: I0218 20:10:49.433144 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" event={"ID":"1666a72f-63ae-46c4-8231-2ebeaac48a63","Type":"ContainerStarted","Data":"2410581aaf722538e47ff129143cf42d0e7472b1d2f2421286c9a3595f679af0"} Feb 18 20:10:50 crc kubenswrapper[5007]: I0218 20:10:50.444983 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" event={"ID":"1666a72f-63ae-46c4-8231-2ebeaac48a63","Type":"ContainerStarted","Data":"a4ad5a25eeec78a597681c60eca355c03164f4241fa6f8bc7f8fdb74c3d8e694"} Feb 18 20:10:50 crc kubenswrapper[5007]: I0218 20:10:50.472806 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" podStartSLOduration=1.86193231 podStartE2EDuration="2.47278605s" podCreationTimestamp="2026-02-18 20:10:48 +0000 UTC" firstStartedPulling="2026-02-18 20:10:49.37682158 +0000 UTC m=+1934.158941114" lastFinishedPulling="2026-02-18 20:10:49.98767533 +0000 UTC m=+1934.769794854" observedRunningTime="2026-02-18 20:10:50.471820893 +0000 UTC m=+1935.253940427" watchObservedRunningTime="2026-02-18 20:10:50.47278605 +0000 UTC m=+1935.254905584" Feb 18 20:10:54 crc kubenswrapper[5007]: I0218 20:10:54.710007 5007 scope.go:117] 
"RemoveContainer" containerID="5977faa6b021fce8c7cc7fdb6f358ea50f9cde97f3edffa25ed6c6c72f98e844" Feb 18 20:11:27 crc kubenswrapper[5007]: I0218 20:11:27.927541 5007 generic.go:334] "Generic (PLEG): container finished" podID="1666a72f-63ae-46c4-8231-2ebeaac48a63" containerID="a4ad5a25eeec78a597681c60eca355c03164f4241fa6f8bc7f8fdb74c3d8e694" exitCode=0 Feb 18 20:11:27 crc kubenswrapper[5007]: I0218 20:11:27.929974 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" event={"ID":"1666a72f-63ae-46c4-8231-2ebeaac48a63","Type":"ContainerDied","Data":"a4ad5a25eeec78a597681c60eca355c03164f4241fa6f8bc7f8fdb74c3d8e694"} Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.413896 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.578053 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-ovn-default-certs-0\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.578153 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-libvirt-combined-ca-bundle\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.578261 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-repo-setup-combined-ca-bundle\") pod 
\"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.578345 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-neutron-metadata-combined-ca-bundle\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.578420 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-ovn-combined-ca-bundle\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.578707 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.579404 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-ssh-key-openstack-edpm-ipam\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.580392 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-bootstrap-combined-ca-bundle\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: 
\"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.580648 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.580701 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-nova-combined-ca-bundle\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.580787 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-inventory\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.580884 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.580962 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b78l8\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-kube-api-access-b78l8\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc 
kubenswrapper[5007]: I0218 20:11:29.580996 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-telemetry-combined-ca-bundle\") pod \"1666a72f-63ae-46c4-8231-2ebeaac48a63\" (UID: \"1666a72f-63ae-46c4-8231-2ebeaac48a63\") " Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.584073 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.584793 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.584845 5007 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.585627 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.586085 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.586598 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.587130 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.588160 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.588813 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.589386 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-kube-api-access-b78l8" (OuterVolumeSpecName: "kube-api-access-b78l8") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). InnerVolumeSpecName "kube-api-access-b78l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.589441 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.593876 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.595944 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.621994 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-inventory" (OuterVolumeSpecName: "inventory") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.628686 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1666a72f-63ae-46c4-8231-2ebeaac48a63" (UID: "1666a72f-63ae-46c4-8231-2ebeaac48a63"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686372 5007 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686422 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686459 5007 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686472 5007 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686486 5007 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686498 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686511 5007 reconciler_common.go:293] "Volume detached for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686530 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b78l8\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-kube-api-access-b78l8\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686546 5007 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686563 5007 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1666a72f-63ae-46c4-8231-2ebeaac48a63-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686575 5007 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686623 5007 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.686638 5007 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1666a72f-63ae-46c4-8231-2ebeaac48a63-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 
20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.955527 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" event={"ID":"1666a72f-63ae-46c4-8231-2ebeaac48a63","Type":"ContainerDied","Data":"2410581aaf722538e47ff129143cf42d0e7472b1d2f2421286c9a3595f679af0"} Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.955563 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2410581aaf722538e47ff129143cf42d0e7472b1d2f2421286c9a3595f679af0" Feb 18 20:11:29 crc kubenswrapper[5007]: I0218 20:11:29.955623 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-brqcb" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.115309 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d"] Feb 18 20:11:30 crc kubenswrapper[5007]: E0218 20:11:30.116115 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1666a72f-63ae-46c4-8231-2ebeaac48a63" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.116151 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="1666a72f-63ae-46c4-8231-2ebeaac48a63" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.116646 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="1666a72f-63ae-46c4-8231-2ebeaac48a63" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.117899 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.120897 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.120957 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.121051 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.121134 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-447rm\" (UniqueName: \"kubernetes.io/projected/7f357b51-937a-4d3e-b437-03ec24109e2d-kube-api-access-447rm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.121237 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: 
\"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.121296 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f357b51-937a-4d3e-b437-03ec24109e2d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.121390 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.121637 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.123762 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.124063 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.133675 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d"] Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.222783 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.223182 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.223326 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-447rm\" (UniqueName: \"kubernetes.io/projected/7f357b51-937a-4d3e-b437-03ec24109e2d-kube-api-access-447rm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.223503 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.223625 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f357b51-937a-4d3e-b437-03ec24109e2d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.225509 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f357b51-937a-4d3e-b437-03ec24109e2d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.229121 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.230062 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.231285 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.253533 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-447rm\" (UniqueName: \"kubernetes.io/projected/7f357b51-937a-4d3e-b437-03ec24109e2d-kube-api-access-447rm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vqx6d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:30 crc kubenswrapper[5007]: I0218 20:11:30.444543 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:11:31 crc kubenswrapper[5007]: I0218 20:11:31.005798 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d"] Feb 18 20:11:31 crc kubenswrapper[5007]: I0218 20:11:31.979967 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" event={"ID":"7f357b51-937a-4d3e-b437-03ec24109e2d","Type":"ContainerStarted","Data":"1d1df64c3a1ea48438a02ba81d127d09d1d7f6cb037cd666181578e3c239f1dd"} Feb 18 20:11:31 crc kubenswrapper[5007]: I0218 20:11:31.980277 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" event={"ID":"7f357b51-937a-4d3e-b437-03ec24109e2d","Type":"ContainerStarted","Data":"019c5ba01a5f36b871ebb15fab2ad76ff920d09e4d3ad7af0c8b9fd60c2b779f"} Feb 18 20:11:32 crc kubenswrapper[5007]: I0218 20:11:32.010352 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" podStartSLOduration=1.522977031 podStartE2EDuration="2.010333595s" podCreationTimestamp="2026-02-18 20:11:30 +0000 UTC" firstStartedPulling="2026-02-18 20:11:31.014626383 +0000 UTC m=+1975.796745907" lastFinishedPulling="2026-02-18 20:11:31.501982947 +0000 UTC m=+1976.284102471" observedRunningTime="2026-02-18 20:11:32.009507302 +0000 UTC m=+1976.791626856" watchObservedRunningTime="2026-02-18 20:11:32.010333595 +0000 UTC m=+1976.792453129" Feb 18 20:12:40 crc kubenswrapper[5007]: I0218 20:12:40.714926 5007 generic.go:334] "Generic (PLEG): container finished" podID="7f357b51-937a-4d3e-b437-03ec24109e2d" containerID="1d1df64c3a1ea48438a02ba81d127d09d1d7f6cb037cd666181578e3c239f1dd" exitCode=0 Feb 18 20:12:40 crc kubenswrapper[5007]: I0218 20:12:40.714987 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" event={"ID":"7f357b51-937a-4d3e-b437-03ec24109e2d","Type":"ContainerDied","Data":"1d1df64c3a1ea48438a02ba81d127d09d1d7f6cb037cd666181578e3c239f1dd"} Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.240784 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.408966 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-447rm\" (UniqueName: \"kubernetes.io/projected/7f357b51-937a-4d3e-b437-03ec24109e2d-kube-api-access-447rm\") pod \"7f357b51-937a-4d3e-b437-03ec24109e2d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.409148 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-inventory\") pod \"7f357b51-937a-4d3e-b437-03ec24109e2d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.409283 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f357b51-937a-4d3e-b437-03ec24109e2d-ovncontroller-config-0\") pod \"7f357b51-937a-4d3e-b437-03ec24109e2d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.409330 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-ovn-combined-ca-bundle\") pod \"7f357b51-937a-4d3e-b437-03ec24109e2d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.409634 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-ssh-key-openstack-edpm-ipam\") pod \"7f357b51-937a-4d3e-b437-03ec24109e2d\" (UID: \"7f357b51-937a-4d3e-b437-03ec24109e2d\") " Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.422450 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7f357b51-937a-4d3e-b437-03ec24109e2d" (UID: "7f357b51-937a-4d3e-b437-03ec24109e2d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.422524 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f357b51-937a-4d3e-b437-03ec24109e2d-kube-api-access-447rm" (OuterVolumeSpecName: "kube-api-access-447rm") pod "7f357b51-937a-4d3e-b437-03ec24109e2d" (UID: "7f357b51-937a-4d3e-b437-03ec24109e2d"). InnerVolumeSpecName "kube-api-access-447rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.434368 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f357b51-937a-4d3e-b437-03ec24109e2d-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7f357b51-937a-4d3e-b437-03ec24109e2d" (UID: "7f357b51-937a-4d3e-b437-03ec24109e2d"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.455139 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f357b51-937a-4d3e-b437-03ec24109e2d" (UID: "7f357b51-937a-4d3e-b437-03ec24109e2d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.461903 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-inventory" (OuterVolumeSpecName: "inventory") pod "7f357b51-937a-4d3e-b437-03ec24109e2d" (UID: "7f357b51-937a-4d3e-b437-03ec24109e2d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.516226 5007 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.516632 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.516799 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-447rm\" (UniqueName: \"kubernetes.io/projected/7f357b51-937a-4d3e-b437-03ec24109e2d-kube-api-access-447rm\") on node \"crc\" DevicePath \"\"" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.516982 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7f357b51-937a-4d3e-b437-03ec24109e2d-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.517135 5007 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f357b51-937a-4d3e-b437-03ec24109e2d-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.738075 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" event={"ID":"7f357b51-937a-4d3e-b437-03ec24109e2d","Type":"ContainerDied","Data":"019c5ba01a5f36b871ebb15fab2ad76ff920d09e4d3ad7af0c8b9fd60c2b779f"} Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.738129 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="019c5ba01a5f36b871ebb15fab2ad76ff920d09e4d3ad7af0c8b9fd60c2b779f" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.738109 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vqx6d" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.854864 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8"] Feb 18 20:12:42 crc kubenswrapper[5007]: E0218 20:12:42.855325 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f357b51-937a-4d3e-b437-03ec24109e2d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.855351 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f357b51-937a-4d3e-b437-03ec24109e2d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.855625 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f357b51-937a-4d3e-b437-03ec24109e2d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.856260 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.858290 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.858569 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.858684 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.858812 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.860606 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.873945 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8"] Feb 18 20:12:42 crc kubenswrapper[5007]: I0218 20:12:42.874494 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.026810 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.026932 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.027060 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.027181 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.027235 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.027375 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t4lm\" (UniqueName: \"kubernetes.io/projected/7d637e0b-53b5-49d0-85b7-1bee16073b64-kube-api-access-7t4lm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.129410 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.129495 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.129608 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t4lm\" (UniqueName: \"kubernetes.io/projected/7d637e0b-53b5-49d0-85b7-1bee16073b64-kube-api-access-7t4lm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.129691 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.129732 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.129775 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.135577 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.135747 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.136342 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.139817 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.146147 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.166268 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t4lm\" (UniqueName: \"kubernetes.io/projected/7d637e0b-53b5-49d0-85b7-1bee16073b64-kube-api-access-7t4lm\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.175833 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:12:43 crc kubenswrapper[5007]: I0218 20:12:43.759390 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8"] Feb 18 20:12:44 crc kubenswrapper[5007]: I0218 20:12:44.764418 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" event={"ID":"7d637e0b-53b5-49d0-85b7-1bee16073b64","Type":"ContainerStarted","Data":"c11a2f1407ca9ca09441500e6c4288b778ceff54db0c9f613eae168bc27cb0b0"} Feb 18 20:12:44 crc kubenswrapper[5007]: I0218 20:12:44.766121 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" event={"ID":"7d637e0b-53b5-49d0-85b7-1bee16073b64","Type":"ContainerStarted","Data":"28e65976781f9818ed8a6ccd12b728be85a254b04b0276aff7661194705fea83"} Feb 18 20:12:44 crc kubenswrapper[5007]: I0218 20:12:44.803325 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" podStartSLOduration=2.255547685 podStartE2EDuration="2.803299516s" podCreationTimestamp="2026-02-18 20:12:42 +0000 UTC" firstStartedPulling="2026-02-18 20:12:43.752162335 +0000 UTC m=+2048.534281899" lastFinishedPulling="2026-02-18 20:12:44.299914206 +0000 UTC m=+2049.082033730" observedRunningTime="2026-02-18 20:12:44.78675904 +0000 UTC m=+2049.568878624" watchObservedRunningTime="2026-02-18 20:12:44.803299516 +0000 UTC m=+2049.585419040" Feb 18 20:12:45 crc kubenswrapper[5007]: I0218 
20:12:45.664070 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:12:45 crc kubenswrapper[5007]: I0218 20:12:45.664161 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:13:15 crc kubenswrapper[5007]: I0218 20:13:15.664139 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:13:15 crc kubenswrapper[5007]: I0218 20:13:15.664700 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:13:36 crc kubenswrapper[5007]: I0218 20:13:36.342635 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" event={"ID":"7d637e0b-53b5-49d0-85b7-1bee16073b64","Type":"ContainerDied","Data":"c11a2f1407ca9ca09441500e6c4288b778ceff54db0c9f613eae168bc27cb0b0"} Feb 18 20:13:36 crc kubenswrapper[5007]: I0218 20:13:36.342154 5007 generic.go:334] "Generic (PLEG): container finished" podID="7d637e0b-53b5-49d0-85b7-1bee16073b64" 
containerID="c11a2f1407ca9ca09441500e6c4288b778ceff54db0c9f613eae168bc27cb0b0" exitCode=0 Feb 18 20:13:37 crc kubenswrapper[5007]: I0218 20:13:37.833731 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.032906 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7d637e0b-53b5-49d0-85b7-1bee16073b64\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.033153 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-ssh-key-openstack-edpm-ipam\") pod \"7d637e0b-53b5-49d0-85b7-1bee16073b64\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.033257 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-neutron-metadata-combined-ca-bundle\") pod \"7d637e0b-53b5-49d0-85b7-1bee16073b64\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.033310 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-inventory\") pod \"7d637e0b-53b5-49d0-85b7-1bee16073b64\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.033346 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-nova-metadata-neutron-config-0\") pod \"7d637e0b-53b5-49d0-85b7-1bee16073b64\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.033381 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t4lm\" (UniqueName: \"kubernetes.io/projected/7d637e0b-53b5-49d0-85b7-1bee16073b64-kube-api-access-7t4lm\") pod \"7d637e0b-53b5-49d0-85b7-1bee16073b64\" (UID: \"7d637e0b-53b5-49d0-85b7-1bee16073b64\") " Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.042386 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7d637e0b-53b5-49d0-85b7-1bee16073b64" (UID: "7d637e0b-53b5-49d0-85b7-1bee16073b64"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.060827 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d637e0b-53b5-49d0-85b7-1bee16073b64-kube-api-access-7t4lm" (OuterVolumeSpecName: "kube-api-access-7t4lm") pod "7d637e0b-53b5-49d0-85b7-1bee16073b64" (UID: "7d637e0b-53b5-49d0-85b7-1bee16073b64"). InnerVolumeSpecName "kube-api-access-7t4lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.092877 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7d637e0b-53b5-49d0-85b7-1bee16073b64" (UID: "7d637e0b-53b5-49d0-85b7-1bee16073b64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.092924 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7d637e0b-53b5-49d0-85b7-1bee16073b64" (UID: "7d637e0b-53b5-49d0-85b7-1bee16073b64"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.097610 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7d637e0b-53b5-49d0-85b7-1bee16073b64" (UID: "7d637e0b-53b5-49d0-85b7-1bee16073b64"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.099288 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-inventory" (OuterVolumeSpecName: "inventory") pod "7d637e0b-53b5-49d0-85b7-1bee16073b64" (UID: "7d637e0b-53b5-49d0-85b7-1bee16073b64"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.137506 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.137542 5007 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.137560 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.137598 5007 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.137615 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t4lm\" (UniqueName: \"kubernetes.io/projected/7d637e0b-53b5-49d0-85b7-1bee16073b64-kube-api-access-7t4lm\") on node \"crc\" DevicePath \"\"" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.137628 5007 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7d637e0b-53b5-49d0-85b7-1bee16073b64-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.365825 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" event={"ID":"7d637e0b-53b5-49d0-85b7-1bee16073b64","Type":"ContainerDied","Data":"28e65976781f9818ed8a6ccd12b728be85a254b04b0276aff7661194705fea83"} Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.366047 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28e65976781f9818ed8a6ccd12b728be85a254b04b0276aff7661194705fea83" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.365884 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kmgw8" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.489829 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g"] Feb 18 20:13:38 crc kubenswrapper[5007]: E0218 20:13:38.490491 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d637e0b-53b5-49d0-85b7-1bee16073b64" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.490524 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d637e0b-53b5-49d0-85b7-1bee16073b64" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.490850 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d637e0b-53b5-49d0-85b7-1bee16073b64" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.491953 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.495332 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.495811 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.496235 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.496572 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.497948 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.503630 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g"] Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.545038 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.545196 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.545283 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.545363 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.545563 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9d2\" (UniqueName: \"kubernetes.io/projected/75664c7f-5fce-483a-941e-6252c6985990-kube-api-access-zc9d2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.647102 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.647238 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9d2\" (UniqueName: \"kubernetes.io/projected/75664c7f-5fce-483a-941e-6252c6985990-kube-api-access-zc9d2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.647360 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.647480 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.647540 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.652936 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: 
\"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.652975 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.653785 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.655037 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.672493 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9d2\" (UniqueName: \"kubernetes.io/projected/75664c7f-5fce-483a-941e-6252c6985990-kube-api-access-zc9d2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dv49g\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:38 crc kubenswrapper[5007]: I0218 20:13:38.813638 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:13:39 crc kubenswrapper[5007]: W0218 20:13:39.468787 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75664c7f_5fce_483a_941e_6252c6985990.slice/crio-a628f71f755c11e83a11ba7c199e9edb58e53bc9fb4b8b7e02bcb7ab77d942d5 WatchSource:0}: Error finding container a628f71f755c11e83a11ba7c199e9edb58e53bc9fb4b8b7e02bcb7ab77d942d5: Status 404 returned error can't find the container with id a628f71f755c11e83a11ba7c199e9edb58e53bc9fb4b8b7e02bcb7ab77d942d5 Feb 18 20:13:39 crc kubenswrapper[5007]: I0218 20:13:39.476590 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g"] Feb 18 20:13:40 crc kubenswrapper[5007]: I0218 20:13:40.386892 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" event={"ID":"75664c7f-5fce-483a-941e-6252c6985990","Type":"ContainerStarted","Data":"5f3b26f51e738d41b25f19fd20120b8affeb9fe760043ac962717d781d8f4f2b"} Feb 18 20:13:40 crc kubenswrapper[5007]: I0218 20:13:40.387226 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" event={"ID":"75664c7f-5fce-483a-941e-6252c6985990","Type":"ContainerStarted","Data":"a628f71f755c11e83a11ba7c199e9edb58e53bc9fb4b8b7e02bcb7ab77d942d5"} Feb 18 20:13:40 crc kubenswrapper[5007]: I0218 20:13:40.403950 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" podStartSLOduration=1.961907917 podStartE2EDuration="2.403934798s" podCreationTimestamp="2026-02-18 20:13:38 +0000 UTC" firstStartedPulling="2026-02-18 20:13:39.470955198 +0000 UTC m=+2104.253074732" lastFinishedPulling="2026-02-18 20:13:39.912982079 +0000 UTC m=+2104.695101613" 
observedRunningTime="2026-02-18 20:13:40.403044492 +0000 UTC m=+2105.185164026" watchObservedRunningTime="2026-02-18 20:13:40.403934798 +0000 UTC m=+2105.186054322" Feb 18 20:13:45 crc kubenswrapper[5007]: I0218 20:13:45.663625 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:13:45 crc kubenswrapper[5007]: I0218 20:13:45.664222 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:13:45 crc kubenswrapper[5007]: I0218 20:13:45.664288 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 20:13:45 crc kubenswrapper[5007]: I0218 20:13:45.665338 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"881a8af4d9e904c55cf00c4e5a1e00bdac8f5296c2773898c042023717199565"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:13:45 crc kubenswrapper[5007]: I0218 20:13:45.665404 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://881a8af4d9e904c55cf00c4e5a1e00bdac8f5296c2773898c042023717199565" gracePeriod=600 Feb 18 20:13:46 crc kubenswrapper[5007]: I0218 
20:13:46.458310 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="881a8af4d9e904c55cf00c4e5a1e00bdac8f5296c2773898c042023717199565" exitCode=0 Feb 18 20:13:46 crc kubenswrapper[5007]: I0218 20:13:46.458376 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"881a8af4d9e904c55cf00c4e5a1e00bdac8f5296c2773898c042023717199565"} Feb 18 20:13:46 crc kubenswrapper[5007]: I0218 20:13:46.458869 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c"} Feb 18 20:13:46 crc kubenswrapper[5007]: I0218 20:13:46.458891 5007 scope.go:117] "RemoveContainer" containerID="78873bc7c22f803b1def6647729d85949331287e19b8aaf85216e0db28be8f90" Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.463021 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jmhpw"] Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.466062 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.477551 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmhpw"] Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.506490 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168f0b31-dad3-45c1-8159-ccbaa3523322-utilities\") pod \"redhat-operators-jmhpw\" (UID: \"168f0b31-dad3-45c1-8159-ccbaa3523322\") " pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.506590 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9s5x\" (UniqueName: \"kubernetes.io/projected/168f0b31-dad3-45c1-8159-ccbaa3523322-kube-api-access-c9s5x\") pod \"redhat-operators-jmhpw\" (UID: \"168f0b31-dad3-45c1-8159-ccbaa3523322\") " pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.506713 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168f0b31-dad3-45c1-8159-ccbaa3523322-catalog-content\") pod \"redhat-operators-jmhpw\" (UID: \"168f0b31-dad3-45c1-8159-ccbaa3523322\") " pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.608757 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9s5x\" (UniqueName: \"kubernetes.io/projected/168f0b31-dad3-45c1-8159-ccbaa3523322-kube-api-access-c9s5x\") pod \"redhat-operators-jmhpw\" (UID: \"168f0b31-dad3-45c1-8159-ccbaa3523322\") " pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.608885 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168f0b31-dad3-45c1-8159-ccbaa3523322-catalog-content\") pod \"redhat-operators-jmhpw\" (UID: \"168f0b31-dad3-45c1-8159-ccbaa3523322\") " pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.608940 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168f0b31-dad3-45c1-8159-ccbaa3523322-utilities\") pod \"redhat-operators-jmhpw\" (UID: \"168f0b31-dad3-45c1-8159-ccbaa3523322\") " pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.609458 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168f0b31-dad3-45c1-8159-ccbaa3523322-utilities\") pod \"redhat-operators-jmhpw\" (UID: \"168f0b31-dad3-45c1-8159-ccbaa3523322\") " pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.609578 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168f0b31-dad3-45c1-8159-ccbaa3523322-catalog-content\") pod \"redhat-operators-jmhpw\" (UID: \"168f0b31-dad3-45c1-8159-ccbaa3523322\") " pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.629523 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9s5x\" (UniqueName: \"kubernetes.io/projected/168f0b31-dad3-45c1-8159-ccbaa3523322-kube-api-access-c9s5x\") pod \"redhat-operators-jmhpw\" (UID: \"168f0b31-dad3-45c1-8159-ccbaa3523322\") " pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:39 crc kubenswrapper[5007]: I0218 20:14:39.802133 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:40 crc kubenswrapper[5007]: I0218 20:14:40.317243 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmhpw"] Feb 18 20:14:41 crc kubenswrapper[5007]: I0218 20:14:41.108110 5007 generic.go:334] "Generic (PLEG): container finished" podID="168f0b31-dad3-45c1-8159-ccbaa3523322" containerID="14ed5aa0e32983d297dd88e5ed3f3ad6fdaf30e4c1bbcd233706463730b32fd4" exitCode=0 Feb 18 20:14:41 crc kubenswrapper[5007]: I0218 20:14:41.108416 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmhpw" event={"ID":"168f0b31-dad3-45c1-8159-ccbaa3523322","Type":"ContainerDied","Data":"14ed5aa0e32983d297dd88e5ed3f3ad6fdaf30e4c1bbcd233706463730b32fd4"} Feb 18 20:14:41 crc kubenswrapper[5007]: I0218 20:14:41.108486 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmhpw" event={"ID":"168f0b31-dad3-45c1-8159-ccbaa3523322","Type":"ContainerStarted","Data":"0602f9544f40393c77b1fd262a243326ca825201b17c2d0289023dd172e0499f"} Feb 18 20:14:43 crc kubenswrapper[5007]: I0218 20:14:43.132011 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmhpw" event={"ID":"168f0b31-dad3-45c1-8159-ccbaa3523322","Type":"ContainerStarted","Data":"a0fb20381181d82e811ea0c3226a36468d5d8d41b101b10cc436bb006eb322ca"} Feb 18 20:14:44 crc kubenswrapper[5007]: I0218 20:14:44.147342 5007 generic.go:334] "Generic (PLEG): container finished" podID="168f0b31-dad3-45c1-8159-ccbaa3523322" containerID="a0fb20381181d82e811ea0c3226a36468d5d8d41b101b10cc436bb006eb322ca" exitCode=0 Feb 18 20:14:44 crc kubenswrapper[5007]: I0218 20:14:44.147455 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmhpw" 
event={"ID":"168f0b31-dad3-45c1-8159-ccbaa3523322","Type":"ContainerDied","Data":"a0fb20381181d82e811ea0c3226a36468d5d8d41b101b10cc436bb006eb322ca"} Feb 18 20:14:45 crc kubenswrapper[5007]: I0218 20:14:45.162979 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmhpw" event={"ID":"168f0b31-dad3-45c1-8159-ccbaa3523322","Type":"ContainerStarted","Data":"46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e"} Feb 18 20:14:45 crc kubenswrapper[5007]: I0218 20:14:45.190704 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jmhpw" podStartSLOduration=2.669983375 podStartE2EDuration="6.190675501s" podCreationTimestamp="2026-02-18 20:14:39 +0000 UTC" firstStartedPulling="2026-02-18 20:14:41.110495953 +0000 UTC m=+2165.892615517" lastFinishedPulling="2026-02-18 20:14:44.631188089 +0000 UTC m=+2169.413307643" observedRunningTime="2026-02-18 20:14:45.183415136 +0000 UTC m=+2169.965534690" watchObservedRunningTime="2026-02-18 20:14:45.190675501 +0000 UTC m=+2169.972795045" Feb 18 20:14:49 crc kubenswrapper[5007]: I0218 20:14:49.803383 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:49 crc kubenswrapper[5007]: I0218 20:14:49.803850 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:50 crc kubenswrapper[5007]: I0218 20:14:50.880942 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jmhpw" podUID="168f0b31-dad3-45c1-8159-ccbaa3523322" containerName="registry-server" probeResult="failure" output=< Feb 18 20:14:50 crc kubenswrapper[5007]: timeout: failed to connect service ":50051" within 1s Feb 18 20:14:50 crc kubenswrapper[5007]: > Feb 18 20:14:59 crc kubenswrapper[5007]: I0218 20:14:59.882657 5007 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:14:59 crc kubenswrapper[5007]: I0218 20:14:59.948702 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.125877 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmhpw"] Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.184381 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9"] Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.186035 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.188653 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.194998 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.196920 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9"] Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.272807 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66497243-bc9b-44fe-9ada-57a39897b3e8-config-volume\") pod \"collect-profiles-29524095-fj9h9\" (UID: \"66497243-bc9b-44fe-9ada-57a39897b3e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.272915 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6dnt\" (UniqueName: \"kubernetes.io/projected/66497243-bc9b-44fe-9ada-57a39897b3e8-kube-api-access-w6dnt\") pod \"collect-profiles-29524095-fj9h9\" (UID: \"66497243-bc9b-44fe-9ada-57a39897b3e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.273029 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66497243-bc9b-44fe-9ada-57a39897b3e8-secret-volume\") pod \"collect-profiles-29524095-fj9h9\" (UID: \"66497243-bc9b-44fe-9ada-57a39897b3e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.375804 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66497243-bc9b-44fe-9ada-57a39897b3e8-config-volume\") pod \"collect-profiles-29524095-fj9h9\" (UID: \"66497243-bc9b-44fe-9ada-57a39897b3e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.375933 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6dnt\" (UniqueName: \"kubernetes.io/projected/66497243-bc9b-44fe-9ada-57a39897b3e8-kube-api-access-w6dnt\") pod \"collect-profiles-29524095-fj9h9\" (UID: \"66497243-bc9b-44fe-9ada-57a39897b3e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.376051 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66497243-bc9b-44fe-9ada-57a39897b3e8-secret-volume\") pod \"collect-profiles-29524095-fj9h9\" (UID: \"66497243-bc9b-44fe-9ada-57a39897b3e8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.376860 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66497243-bc9b-44fe-9ada-57a39897b3e8-config-volume\") pod \"collect-profiles-29524095-fj9h9\" (UID: \"66497243-bc9b-44fe-9ada-57a39897b3e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.383615 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66497243-bc9b-44fe-9ada-57a39897b3e8-secret-volume\") pod \"collect-profiles-29524095-fj9h9\" (UID: \"66497243-bc9b-44fe-9ada-57a39897b3e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.393731 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6dnt\" (UniqueName: \"kubernetes.io/projected/66497243-bc9b-44fe-9ada-57a39897b3e8-kube-api-access-w6dnt\") pod \"collect-profiles-29524095-fj9h9\" (UID: \"66497243-bc9b-44fe-9ada-57a39897b3e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:00 crc kubenswrapper[5007]: I0218 20:15:00.505219 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:01 crc kubenswrapper[5007]: I0218 20:15:01.034042 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9"] Feb 18 20:15:01 crc kubenswrapper[5007]: I0218 20:15:01.336284 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" event={"ID":"66497243-bc9b-44fe-9ada-57a39897b3e8","Type":"ContainerStarted","Data":"e57da5449a57ba51d91b81003feb8fb5a2bbe7efc3b8e117485344f77959efa3"} Feb 18 20:15:01 crc kubenswrapper[5007]: I0218 20:15:01.336353 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" event={"ID":"66497243-bc9b-44fe-9ada-57a39897b3e8","Type":"ContainerStarted","Data":"a64e0a5dda2f0c812432a54cc52da249e7bcda6e7d9bea9c5edcfba0d1fa8c01"} Feb 18 20:15:01 crc kubenswrapper[5007]: I0218 20:15:01.336504 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jmhpw" podUID="168f0b31-dad3-45c1-8159-ccbaa3523322" containerName="registry-server" containerID="cri-o://46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e" gracePeriod=2 Feb 18 20:15:01 crc kubenswrapper[5007]: I0218 20:15:01.358629 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" podStartSLOduration=1.358605276 podStartE2EDuration="1.358605276s" podCreationTimestamp="2026-02-18 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:15:01.35306628 +0000 UTC m=+2186.135185814" watchObservedRunningTime="2026-02-18 20:15:01.358605276 +0000 UTC m=+2186.140724800" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 
20:15:02.015914 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.123071 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9s5x\" (UniqueName: \"kubernetes.io/projected/168f0b31-dad3-45c1-8159-ccbaa3523322-kube-api-access-c9s5x\") pod \"168f0b31-dad3-45c1-8159-ccbaa3523322\" (UID: \"168f0b31-dad3-45c1-8159-ccbaa3523322\") " Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.123427 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168f0b31-dad3-45c1-8159-ccbaa3523322-utilities\") pod \"168f0b31-dad3-45c1-8159-ccbaa3523322\" (UID: \"168f0b31-dad3-45c1-8159-ccbaa3523322\") " Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.123581 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168f0b31-dad3-45c1-8159-ccbaa3523322-catalog-content\") pod \"168f0b31-dad3-45c1-8159-ccbaa3523322\" (UID: \"168f0b31-dad3-45c1-8159-ccbaa3523322\") " Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.127315 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/168f0b31-dad3-45c1-8159-ccbaa3523322-utilities" (OuterVolumeSpecName: "utilities") pod "168f0b31-dad3-45c1-8159-ccbaa3523322" (UID: "168f0b31-dad3-45c1-8159-ccbaa3523322"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.146898 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/168f0b31-dad3-45c1-8159-ccbaa3523322-kube-api-access-c9s5x" (OuterVolumeSpecName: "kube-api-access-c9s5x") pod "168f0b31-dad3-45c1-8159-ccbaa3523322" (UID: "168f0b31-dad3-45c1-8159-ccbaa3523322"). InnerVolumeSpecName "kube-api-access-c9s5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.228157 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/168f0b31-dad3-45c1-8159-ccbaa3523322-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.228186 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9s5x\" (UniqueName: \"kubernetes.io/projected/168f0b31-dad3-45c1-8159-ccbaa3523322-kube-api-access-c9s5x\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.269463 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/168f0b31-dad3-45c1-8159-ccbaa3523322-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "168f0b31-dad3-45c1-8159-ccbaa3523322" (UID: "168f0b31-dad3-45c1-8159-ccbaa3523322"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.337911 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/168f0b31-dad3-45c1-8159-ccbaa3523322-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.349276 5007 generic.go:334] "Generic (PLEG): container finished" podID="168f0b31-dad3-45c1-8159-ccbaa3523322" containerID="46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e" exitCode=0 Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.349647 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmhpw" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.349406 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmhpw" event={"ID":"168f0b31-dad3-45c1-8159-ccbaa3523322","Type":"ContainerDied","Data":"46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e"} Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.350320 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmhpw" event={"ID":"168f0b31-dad3-45c1-8159-ccbaa3523322","Type":"ContainerDied","Data":"0602f9544f40393c77b1fd262a243326ca825201b17c2d0289023dd172e0499f"} Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.350384 5007 scope.go:117] "RemoveContainer" containerID="46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.351458 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" event={"ID":"66497243-bc9b-44fe-9ada-57a39897b3e8","Type":"ContainerDied","Data":"e57da5449a57ba51d91b81003feb8fb5a2bbe7efc3b8e117485344f77959efa3"} Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.351466 
5007 generic.go:334] "Generic (PLEG): container finished" podID="66497243-bc9b-44fe-9ada-57a39897b3e8" containerID="e57da5449a57ba51d91b81003feb8fb5a2bbe7efc3b8e117485344f77959efa3" exitCode=0 Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.388982 5007 scope.go:117] "RemoveContainer" containerID="a0fb20381181d82e811ea0c3226a36468d5d8d41b101b10cc436bb006eb322ca" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.407631 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmhpw"] Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.412188 5007 scope.go:117] "RemoveContainer" containerID="14ed5aa0e32983d297dd88e5ed3f3ad6fdaf30e4c1bbcd233706463730b32fd4" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.418873 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jmhpw"] Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.461185 5007 scope.go:117] "RemoveContainer" containerID="46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e" Feb 18 20:15:02 crc kubenswrapper[5007]: E0218 20:15:02.462922 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e\": container with ID starting with 46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e not found: ID does not exist" containerID="46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.462956 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e"} err="failed to get container status \"46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e\": rpc error: code = NotFound desc = could not find container 
\"46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e\": container with ID starting with 46d2d304cefe8e07dfd70ab0cfb72643e23bd10c74fa1ebea0cb1179219c925e not found: ID does not exist" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.462980 5007 scope.go:117] "RemoveContainer" containerID="a0fb20381181d82e811ea0c3226a36468d5d8d41b101b10cc436bb006eb322ca" Feb 18 20:15:02 crc kubenswrapper[5007]: E0218 20:15:02.463233 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0fb20381181d82e811ea0c3226a36468d5d8d41b101b10cc436bb006eb322ca\": container with ID starting with a0fb20381181d82e811ea0c3226a36468d5d8d41b101b10cc436bb006eb322ca not found: ID does not exist" containerID="a0fb20381181d82e811ea0c3226a36468d5d8d41b101b10cc436bb006eb322ca" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.463251 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fb20381181d82e811ea0c3226a36468d5d8d41b101b10cc436bb006eb322ca"} err="failed to get container status \"a0fb20381181d82e811ea0c3226a36468d5d8d41b101b10cc436bb006eb322ca\": rpc error: code = NotFound desc = could not find container \"a0fb20381181d82e811ea0c3226a36468d5d8d41b101b10cc436bb006eb322ca\": container with ID starting with a0fb20381181d82e811ea0c3226a36468d5d8d41b101b10cc436bb006eb322ca not found: ID does not exist" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.463265 5007 scope.go:117] "RemoveContainer" containerID="14ed5aa0e32983d297dd88e5ed3f3ad6fdaf30e4c1bbcd233706463730b32fd4" Feb 18 20:15:02 crc kubenswrapper[5007]: E0218 20:15:02.463529 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14ed5aa0e32983d297dd88e5ed3f3ad6fdaf30e4c1bbcd233706463730b32fd4\": container with ID starting with 14ed5aa0e32983d297dd88e5ed3f3ad6fdaf30e4c1bbcd233706463730b32fd4 not found: ID does not exist" 
containerID="14ed5aa0e32983d297dd88e5ed3f3ad6fdaf30e4c1bbcd233706463730b32fd4" Feb 18 20:15:02 crc kubenswrapper[5007]: I0218 20:15:02.463548 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14ed5aa0e32983d297dd88e5ed3f3ad6fdaf30e4c1bbcd233706463730b32fd4"} err="failed to get container status \"14ed5aa0e32983d297dd88e5ed3f3ad6fdaf30e4c1bbcd233706463730b32fd4\": rpc error: code = NotFound desc = could not find container \"14ed5aa0e32983d297dd88e5ed3f3ad6fdaf30e4c1bbcd233706463730b32fd4\": container with ID starting with 14ed5aa0e32983d297dd88e5ed3f3ad6fdaf30e4c1bbcd233706463730b32fd4 not found: ID does not exist" Feb 18 20:15:03 crc kubenswrapper[5007]: I0218 20:15:03.785373 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:03 crc kubenswrapper[5007]: I0218 20:15:03.869895 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66497243-bc9b-44fe-9ada-57a39897b3e8-config-volume\") pod \"66497243-bc9b-44fe-9ada-57a39897b3e8\" (UID: \"66497243-bc9b-44fe-9ada-57a39897b3e8\") " Feb 18 20:15:03 crc kubenswrapper[5007]: I0218 20:15:03.870042 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66497243-bc9b-44fe-9ada-57a39897b3e8-secret-volume\") pod \"66497243-bc9b-44fe-9ada-57a39897b3e8\" (UID: \"66497243-bc9b-44fe-9ada-57a39897b3e8\") " Feb 18 20:15:03 crc kubenswrapper[5007]: I0218 20:15:03.870233 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6dnt\" (UniqueName: \"kubernetes.io/projected/66497243-bc9b-44fe-9ada-57a39897b3e8-kube-api-access-w6dnt\") pod \"66497243-bc9b-44fe-9ada-57a39897b3e8\" (UID: \"66497243-bc9b-44fe-9ada-57a39897b3e8\") " Feb 18 20:15:03 crc 
kubenswrapper[5007]: I0218 20:15:03.870863 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66497243-bc9b-44fe-9ada-57a39897b3e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "66497243-bc9b-44fe-9ada-57a39897b3e8" (UID: "66497243-bc9b-44fe-9ada-57a39897b3e8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:15:03 crc kubenswrapper[5007]: I0218 20:15:03.871157 5007 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66497243-bc9b-44fe-9ada-57a39897b3e8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:03 crc kubenswrapper[5007]: I0218 20:15:03.878067 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66497243-bc9b-44fe-9ada-57a39897b3e8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "66497243-bc9b-44fe-9ada-57a39897b3e8" (UID: "66497243-bc9b-44fe-9ada-57a39897b3e8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:15:03 crc kubenswrapper[5007]: I0218 20:15:03.883111 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66497243-bc9b-44fe-9ada-57a39897b3e8-kube-api-access-w6dnt" (OuterVolumeSpecName: "kube-api-access-w6dnt") pod "66497243-bc9b-44fe-9ada-57a39897b3e8" (UID: "66497243-bc9b-44fe-9ada-57a39897b3e8"). InnerVolumeSpecName "kube-api-access-w6dnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:15:03 crc kubenswrapper[5007]: I0218 20:15:03.927423 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="168f0b31-dad3-45c1-8159-ccbaa3523322" path="/var/lib/kubelet/pods/168f0b31-dad3-45c1-8159-ccbaa3523322/volumes" Feb 18 20:15:03 crc kubenswrapper[5007]: I0218 20:15:03.973142 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6dnt\" (UniqueName: \"kubernetes.io/projected/66497243-bc9b-44fe-9ada-57a39897b3e8-kube-api-access-w6dnt\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:03 crc kubenswrapper[5007]: I0218 20:15:03.973193 5007 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66497243-bc9b-44fe-9ada-57a39897b3e8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:04 crc kubenswrapper[5007]: I0218 20:15:04.387504 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" event={"ID":"66497243-bc9b-44fe-9ada-57a39897b3e8","Type":"ContainerDied","Data":"a64e0a5dda2f0c812432a54cc52da249e7bcda6e7d9bea9c5edcfba0d1fa8c01"} Feb 18 20:15:04 crc kubenswrapper[5007]: I0218 20:15:04.387544 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a64e0a5dda2f0c812432a54cc52da249e7bcda6e7d9bea9c5edcfba0d1fa8c01" Feb 18 20:15:04 crc kubenswrapper[5007]: I0218 20:15:04.387606 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9" Feb 18 20:15:04 crc kubenswrapper[5007]: I0218 20:15:04.443116 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb"] Feb 18 20:15:04 crc kubenswrapper[5007]: I0218 20:15:04.454047 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-b7rpb"] Feb 18 20:15:05 crc kubenswrapper[5007]: I0218 20:15:05.922575 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370cd8fe-7bde-4e36-b8bc-dca582137d44" path="/var/lib/kubelet/pods/370cd8fe-7bde-4e36-b8bc-dca582137d44/volumes" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.304659 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vzk2b"] Feb 18 20:15:38 crc kubenswrapper[5007]: E0218 20:15:38.305710 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168f0b31-dad3-45c1-8159-ccbaa3523322" containerName="registry-server" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.305727 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="168f0b31-dad3-45c1-8159-ccbaa3523322" containerName="registry-server" Feb 18 20:15:38 crc kubenswrapper[5007]: E0218 20:15:38.305745 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168f0b31-dad3-45c1-8159-ccbaa3523322" containerName="extract-content" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.305753 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="168f0b31-dad3-45c1-8159-ccbaa3523322" containerName="extract-content" Feb 18 20:15:38 crc kubenswrapper[5007]: E0218 20:15:38.305780 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168f0b31-dad3-45c1-8159-ccbaa3523322" containerName="extract-utilities" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.305788 5007 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="168f0b31-dad3-45c1-8159-ccbaa3523322" containerName="extract-utilities" Feb 18 20:15:38 crc kubenswrapper[5007]: E0218 20:15:38.305812 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66497243-bc9b-44fe-9ada-57a39897b3e8" containerName="collect-profiles" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.305819 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="66497243-bc9b-44fe-9ada-57a39897b3e8" containerName="collect-profiles" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.306040 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="168f0b31-dad3-45c1-8159-ccbaa3523322" containerName="registry-server" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.306063 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="66497243-bc9b-44fe-9ada-57a39897b3e8" containerName="collect-profiles" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.307861 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.325121 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzk2b"] Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.418833 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03f8f55-0d58-4531-97be-0431d9d4fe87-utilities\") pod \"redhat-marketplace-vzk2b\" (UID: \"f03f8f55-0d58-4531-97be-0431d9d4fe87\") " pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.418875 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03f8f55-0d58-4531-97be-0431d9d4fe87-catalog-content\") pod \"redhat-marketplace-vzk2b\" (UID: \"f03f8f55-0d58-4531-97be-0431d9d4fe87\") " pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.419110 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxh9h\" (UniqueName: \"kubernetes.io/projected/f03f8f55-0d58-4531-97be-0431d9d4fe87-kube-api-access-kxh9h\") pod \"redhat-marketplace-vzk2b\" (UID: \"f03f8f55-0d58-4531-97be-0431d9d4fe87\") " pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.521394 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03f8f55-0d58-4531-97be-0431d9d4fe87-utilities\") pod \"redhat-marketplace-vzk2b\" (UID: \"f03f8f55-0d58-4531-97be-0431d9d4fe87\") " pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.521465 5007 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03f8f55-0d58-4531-97be-0431d9d4fe87-catalog-content\") pod \"redhat-marketplace-vzk2b\" (UID: \"f03f8f55-0d58-4531-97be-0431d9d4fe87\") " pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.521570 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxh9h\" (UniqueName: \"kubernetes.io/projected/f03f8f55-0d58-4531-97be-0431d9d4fe87-kube-api-access-kxh9h\") pod \"redhat-marketplace-vzk2b\" (UID: \"f03f8f55-0d58-4531-97be-0431d9d4fe87\") " pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.522135 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03f8f55-0d58-4531-97be-0431d9d4fe87-utilities\") pod \"redhat-marketplace-vzk2b\" (UID: \"f03f8f55-0d58-4531-97be-0431d9d4fe87\") " pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.522189 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03f8f55-0d58-4531-97be-0431d9d4fe87-catalog-content\") pod \"redhat-marketplace-vzk2b\" (UID: \"f03f8f55-0d58-4531-97be-0431d9d4fe87\") " pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.542579 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxh9h\" (UniqueName: \"kubernetes.io/projected/f03f8f55-0d58-4531-97be-0431d9d4fe87-kube-api-access-kxh9h\") pod \"redhat-marketplace-vzk2b\" (UID: \"f03f8f55-0d58-4531-97be-0431d9d4fe87\") " pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:38 crc kubenswrapper[5007]: I0218 20:15:38.639799 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:39 crc kubenswrapper[5007]: I0218 20:15:39.109889 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzk2b"] Feb 18 20:15:39 crc kubenswrapper[5007]: W0218 20:15:39.122003 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf03f8f55_0d58_4531_97be_0431d9d4fe87.slice/crio-ca59b7dde00b5f4696e16a4e50d9111540422827ccd7bd7f63bcf387d58f2359 WatchSource:0}: Error finding container ca59b7dde00b5f4696e16a4e50d9111540422827ccd7bd7f63bcf387d58f2359: Status 404 returned error can't find the container with id ca59b7dde00b5f4696e16a4e50d9111540422827ccd7bd7f63bcf387d58f2359 Feb 18 20:15:39 crc kubenswrapper[5007]: I0218 20:15:39.791939 5007 generic.go:334] "Generic (PLEG): container finished" podID="f03f8f55-0d58-4531-97be-0431d9d4fe87" containerID="75cdcb98ece73a7679fd06df9f632ea7160ff6d15303879ef2e829f83f8a30cd" exitCode=0 Feb 18 20:15:39 crc kubenswrapper[5007]: I0218 20:15:39.792051 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzk2b" event={"ID":"f03f8f55-0d58-4531-97be-0431d9d4fe87","Type":"ContainerDied","Data":"75cdcb98ece73a7679fd06df9f632ea7160ff6d15303879ef2e829f83f8a30cd"} Feb 18 20:15:39 crc kubenswrapper[5007]: I0218 20:15:39.792316 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzk2b" event={"ID":"f03f8f55-0d58-4531-97be-0431d9d4fe87","Type":"ContainerStarted","Data":"ca59b7dde00b5f4696e16a4e50d9111540422827ccd7bd7f63bcf387d58f2359"} Feb 18 20:15:39 crc kubenswrapper[5007]: I0218 20:15:39.795264 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:15:41 crc kubenswrapper[5007]: I0218 20:15:41.819373 5007 generic.go:334] "Generic (PLEG): container finished" 
podID="f03f8f55-0d58-4531-97be-0431d9d4fe87" containerID="366042afaa3d524729347192b9dfb54fd6f30f1347f06f3acc3df9df59a954b5" exitCode=0 Feb 18 20:15:41 crc kubenswrapper[5007]: I0218 20:15:41.819474 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzk2b" event={"ID":"f03f8f55-0d58-4531-97be-0431d9d4fe87","Type":"ContainerDied","Data":"366042afaa3d524729347192b9dfb54fd6f30f1347f06f3acc3df9df59a954b5"} Feb 18 20:15:42 crc kubenswrapper[5007]: I0218 20:15:42.834112 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzk2b" event={"ID":"f03f8f55-0d58-4531-97be-0431d9d4fe87","Type":"ContainerStarted","Data":"67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459"} Feb 18 20:15:42 crc kubenswrapper[5007]: I0218 20:15:42.869210 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vzk2b" podStartSLOduration=2.44074425 podStartE2EDuration="4.869182366s" podCreationTimestamp="2026-02-18 20:15:38 +0000 UTC" firstStartedPulling="2026-02-18 20:15:39.794702348 +0000 UTC m=+2224.576821912" lastFinishedPulling="2026-02-18 20:15:42.223140474 +0000 UTC m=+2227.005260028" observedRunningTime="2026-02-18 20:15:42.861420767 +0000 UTC m=+2227.643540331" watchObservedRunningTime="2026-02-18 20:15:42.869182366 +0000 UTC m=+2227.651301930" Feb 18 20:15:45 crc kubenswrapper[5007]: I0218 20:15:45.663749 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:15:45 crc kubenswrapper[5007]: I0218 20:15:45.664160 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:15:48 crc kubenswrapper[5007]: I0218 20:15:48.640529 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:48 crc kubenswrapper[5007]: I0218 20:15:48.640832 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:48 crc kubenswrapper[5007]: I0218 20:15:48.707075 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:49 crc kubenswrapper[5007]: I0218 20:15:49.009728 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:49 crc kubenswrapper[5007]: I0218 20:15:49.076365 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzk2b"] Feb 18 20:15:50 crc kubenswrapper[5007]: I0218 20:15:50.952488 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vzk2b" podUID="f03f8f55-0d58-4531-97be-0431d9d4fe87" containerName="registry-server" containerID="cri-o://67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459" gracePeriod=2 Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.493019 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.613406 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03f8f55-0d58-4531-97be-0431d9d4fe87-catalog-content\") pod \"f03f8f55-0d58-4531-97be-0431d9d4fe87\" (UID: \"f03f8f55-0d58-4531-97be-0431d9d4fe87\") " Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.613503 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxh9h\" (UniqueName: \"kubernetes.io/projected/f03f8f55-0d58-4531-97be-0431d9d4fe87-kube-api-access-kxh9h\") pod \"f03f8f55-0d58-4531-97be-0431d9d4fe87\" (UID: \"f03f8f55-0d58-4531-97be-0431d9d4fe87\") " Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.613746 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03f8f55-0d58-4531-97be-0431d9d4fe87-utilities\") pod \"f03f8f55-0d58-4531-97be-0431d9d4fe87\" (UID: \"f03f8f55-0d58-4531-97be-0431d9d4fe87\") " Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.614653 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03f8f55-0d58-4531-97be-0431d9d4fe87-utilities" (OuterVolumeSpecName: "utilities") pod "f03f8f55-0d58-4531-97be-0431d9d4fe87" (UID: "f03f8f55-0d58-4531-97be-0431d9d4fe87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.620526 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03f8f55-0d58-4531-97be-0431d9d4fe87-kube-api-access-kxh9h" (OuterVolumeSpecName: "kube-api-access-kxh9h") pod "f03f8f55-0d58-4531-97be-0431d9d4fe87" (UID: "f03f8f55-0d58-4531-97be-0431d9d4fe87"). InnerVolumeSpecName "kube-api-access-kxh9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.639371 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03f8f55-0d58-4531-97be-0431d9d4fe87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f03f8f55-0d58-4531-97be-0431d9d4fe87" (UID: "f03f8f55-0d58-4531-97be-0431d9d4fe87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.716380 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03f8f55-0d58-4531-97be-0431d9d4fe87-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.716413 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03f8f55-0d58-4531-97be-0431d9d4fe87-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.716442 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxh9h\" (UniqueName: \"kubernetes.io/projected/f03f8f55-0d58-4531-97be-0431d9d4fe87-kube-api-access-kxh9h\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.963279 5007 generic.go:334] "Generic (PLEG): container finished" podID="f03f8f55-0d58-4531-97be-0431d9d4fe87" containerID="67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459" exitCode=0 Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.963327 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzk2b" event={"ID":"f03f8f55-0d58-4531-97be-0431d9d4fe87","Type":"ContainerDied","Data":"67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459"} Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.963354 5007 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzk2b" Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.963380 5007 scope.go:117] "RemoveContainer" containerID="67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459" Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.963365 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzk2b" event={"ID":"f03f8f55-0d58-4531-97be-0431d9d4fe87","Type":"ContainerDied","Data":"ca59b7dde00b5f4696e16a4e50d9111540422827ccd7bd7f63bcf387d58f2359"} Feb 18 20:15:51 crc kubenswrapper[5007]: I0218 20:15:51.988248 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzk2b"] Feb 18 20:15:52 crc kubenswrapper[5007]: I0218 20:15:52.001084 5007 scope.go:117] "RemoveContainer" containerID="366042afaa3d524729347192b9dfb54fd6f30f1347f06f3acc3df9df59a954b5" Feb 18 20:15:52 crc kubenswrapper[5007]: I0218 20:15:52.003145 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzk2b"] Feb 18 20:15:52 crc kubenswrapper[5007]: I0218 20:15:52.026638 5007 scope.go:117] "RemoveContainer" containerID="75cdcb98ece73a7679fd06df9f632ea7160ff6d15303879ef2e829f83f8a30cd" Feb 18 20:15:52 crc kubenswrapper[5007]: I0218 20:15:52.087045 5007 scope.go:117] "RemoveContainer" containerID="67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459" Feb 18 20:15:52 crc kubenswrapper[5007]: E0218 20:15:52.087745 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459\": container with ID starting with 67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459 not found: ID does not exist" containerID="67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459" Feb 18 20:15:52 crc kubenswrapper[5007]: I0218 20:15:52.087809 5007 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459"} err="failed to get container status \"67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459\": rpc error: code = NotFound desc = could not find container \"67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459\": container with ID starting with 67e8a02494dc20f304501643a3cc91ceffd953ecdb16cd67d122d0f42a9d0459 not found: ID does not exist" Feb 18 20:15:52 crc kubenswrapper[5007]: I0218 20:15:52.087849 5007 scope.go:117] "RemoveContainer" containerID="366042afaa3d524729347192b9dfb54fd6f30f1347f06f3acc3df9df59a954b5" Feb 18 20:15:52 crc kubenswrapper[5007]: E0218 20:15:52.088308 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"366042afaa3d524729347192b9dfb54fd6f30f1347f06f3acc3df9df59a954b5\": container with ID starting with 366042afaa3d524729347192b9dfb54fd6f30f1347f06f3acc3df9df59a954b5 not found: ID does not exist" containerID="366042afaa3d524729347192b9dfb54fd6f30f1347f06f3acc3df9df59a954b5" Feb 18 20:15:52 crc kubenswrapper[5007]: I0218 20:15:52.088344 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366042afaa3d524729347192b9dfb54fd6f30f1347f06f3acc3df9df59a954b5"} err="failed to get container status \"366042afaa3d524729347192b9dfb54fd6f30f1347f06f3acc3df9df59a954b5\": rpc error: code = NotFound desc = could not find container \"366042afaa3d524729347192b9dfb54fd6f30f1347f06f3acc3df9df59a954b5\": container with ID starting with 366042afaa3d524729347192b9dfb54fd6f30f1347f06f3acc3df9df59a954b5 not found: ID does not exist" Feb 18 20:15:52 crc kubenswrapper[5007]: I0218 20:15:52.088368 5007 scope.go:117] "RemoveContainer" containerID="75cdcb98ece73a7679fd06df9f632ea7160ff6d15303879ef2e829f83f8a30cd" Feb 18 20:15:52 crc kubenswrapper[5007]: E0218 
20:15:52.088824 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cdcb98ece73a7679fd06df9f632ea7160ff6d15303879ef2e829f83f8a30cd\": container with ID starting with 75cdcb98ece73a7679fd06df9f632ea7160ff6d15303879ef2e829f83f8a30cd not found: ID does not exist" containerID="75cdcb98ece73a7679fd06df9f632ea7160ff6d15303879ef2e829f83f8a30cd" Feb 18 20:15:52 crc kubenswrapper[5007]: I0218 20:15:52.088865 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cdcb98ece73a7679fd06df9f632ea7160ff6d15303879ef2e829f83f8a30cd"} err="failed to get container status \"75cdcb98ece73a7679fd06df9f632ea7160ff6d15303879ef2e829f83f8a30cd\": rpc error: code = NotFound desc = could not find container \"75cdcb98ece73a7679fd06df9f632ea7160ff6d15303879ef2e829f83f8a30cd\": container with ID starting with 75cdcb98ece73a7679fd06df9f632ea7160ff6d15303879ef2e829f83f8a30cd not found: ID does not exist" Feb 18 20:15:53 crc kubenswrapper[5007]: I0218 20:15:53.931062 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03f8f55-0d58-4531-97be-0431d9d4fe87" path="/var/lib/kubelet/pods/f03f8f55-0d58-4531-97be-0431d9d4fe87/volumes" Feb 18 20:15:54 crc kubenswrapper[5007]: I0218 20:15:54.912668 5007 scope.go:117] "RemoveContainer" containerID="8f925a051489d04e66e2fe4e88c59b74fde118e0beae3a7f0b4fcf4152c116c4" Feb 18 20:16:15 crc kubenswrapper[5007]: I0218 20:16:15.664179 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:16:15 crc kubenswrapper[5007]: I0218 20:16:15.664645 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" 
podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:16:45 crc kubenswrapper[5007]: I0218 20:16:45.663632 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:16:45 crc kubenswrapper[5007]: I0218 20:16:45.665492 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:16:45 crc kubenswrapper[5007]: I0218 20:16:45.665658 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 20:16:45 crc kubenswrapper[5007]: I0218 20:16:45.666636 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:16:45 crc kubenswrapper[5007]: I0218 20:16:45.666797 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" gracePeriod=600 Feb 18 
20:16:45 crc kubenswrapper[5007]: E0218 20:16:45.793917 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:16:46 crc kubenswrapper[5007]: I0218 20:16:46.539493 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" exitCode=0 Feb 18 20:16:46 crc kubenswrapper[5007]: I0218 20:16:46.539562 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c"} Feb 18 20:16:46 crc kubenswrapper[5007]: I0218 20:16:46.539829 5007 scope.go:117] "RemoveContainer" containerID="881a8af4d9e904c55cf00c4e5a1e00bdac8f5296c2773898c042023717199565" Feb 18 20:16:46 crc kubenswrapper[5007]: I0218 20:16:46.540527 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:16:46 crc kubenswrapper[5007]: E0218 20:16:46.540866 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:17:00 crc kubenswrapper[5007]: I0218 20:17:00.909408 5007 
scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:17:00 crc kubenswrapper[5007]: E0218 20:17:00.910168 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:17:11 crc kubenswrapper[5007]: I0218 20:17:11.912623 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:17:11 crc kubenswrapper[5007]: E0218 20:17:11.913374 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:17:22 crc kubenswrapper[5007]: I0218 20:17:22.909910 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:17:22 crc kubenswrapper[5007]: E0218 20:17:22.911179 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:17:34 crc kubenswrapper[5007]: I0218 
20:17:34.910095 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:17:34 crc kubenswrapper[5007]: E0218 20:17:34.911525 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:17:44 crc kubenswrapper[5007]: I0218 20:17:44.139421 5007 generic.go:334] "Generic (PLEG): container finished" podID="75664c7f-5fce-483a-941e-6252c6985990" containerID="5f3b26f51e738d41b25f19fd20120b8affeb9fe760043ac962717d781d8f4f2b" exitCode=0 Feb 18 20:17:44 crc kubenswrapper[5007]: I0218 20:17:44.139521 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" event={"ID":"75664c7f-5fce-483a-941e-6252c6985990","Type":"ContainerDied","Data":"5f3b26f51e738d41b25f19fd20120b8affeb9fe760043ac962717d781d8f4f2b"} Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.720166 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.803094 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc9d2\" (UniqueName: \"kubernetes.io/projected/75664c7f-5fce-483a-941e-6252c6985990-kube-api-access-zc9d2\") pod \"75664c7f-5fce-483a-941e-6252c6985990\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.803494 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-libvirt-secret-0\") pod \"75664c7f-5fce-483a-941e-6252c6985990\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.803723 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-inventory\") pod \"75664c7f-5fce-483a-941e-6252c6985990\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.803930 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-libvirt-combined-ca-bundle\") pod \"75664c7f-5fce-483a-941e-6252c6985990\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.804185 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-ssh-key-openstack-edpm-ipam\") pod \"75664c7f-5fce-483a-941e-6252c6985990\" (UID: \"75664c7f-5fce-483a-941e-6252c6985990\") " Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.811806 5007 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "75664c7f-5fce-483a-941e-6252c6985990" (UID: "75664c7f-5fce-483a-941e-6252c6985990"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.817098 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75664c7f-5fce-483a-941e-6252c6985990-kube-api-access-zc9d2" (OuterVolumeSpecName: "kube-api-access-zc9d2") pod "75664c7f-5fce-483a-941e-6252c6985990" (UID: "75664c7f-5fce-483a-941e-6252c6985990"). InnerVolumeSpecName "kube-api-access-zc9d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.830322 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "75664c7f-5fce-483a-941e-6252c6985990" (UID: "75664c7f-5fce-483a-941e-6252c6985990"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.831320 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-inventory" (OuterVolumeSpecName: "inventory") pod "75664c7f-5fce-483a-941e-6252c6985990" (UID: "75664c7f-5fce-483a-941e-6252c6985990"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.835641 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "75664c7f-5fce-483a-941e-6252c6985990" (UID: "75664c7f-5fce-483a-941e-6252c6985990"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.906937 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc9d2\" (UniqueName: \"kubernetes.io/projected/75664c7f-5fce-483a-941e-6252c6985990-kube-api-access-zc9d2\") on node \"crc\" DevicePath \"\"" Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.906990 5007 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.907033 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.907055 5007 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:17:45 crc kubenswrapper[5007]: I0218 20:17:45.907074 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75664c7f-5fce-483a-941e-6252c6985990-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.164012 5007 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" event={"ID":"75664c7f-5fce-483a-941e-6252c6985990","Type":"ContainerDied","Data":"a628f71f755c11e83a11ba7c199e9edb58e53bc9fb4b8b7e02bcb7ab77d942d5"} Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.164106 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a628f71f755c11e83a11ba7c199e9edb58e53bc9fb4b8b7e02bcb7ab77d942d5" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.164148 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dv49g" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.293294 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj"] Feb 18 20:17:46 crc kubenswrapper[5007]: E0218 20:17:46.294325 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03f8f55-0d58-4531-97be-0431d9d4fe87" containerName="extract-content" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.294351 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03f8f55-0d58-4531-97be-0431d9d4fe87" containerName="extract-content" Feb 18 20:17:46 crc kubenswrapper[5007]: E0218 20:17:46.294364 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03f8f55-0d58-4531-97be-0431d9d4fe87" containerName="registry-server" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.294373 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03f8f55-0d58-4531-97be-0431d9d4fe87" containerName="registry-server" Feb 18 20:17:46 crc kubenswrapper[5007]: E0218 20:17:46.294389 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03f8f55-0d58-4531-97be-0431d9d4fe87" containerName="extract-utilities" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.294397 5007 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f03f8f55-0d58-4531-97be-0431d9d4fe87" containerName="extract-utilities" Feb 18 20:17:46 crc kubenswrapper[5007]: E0218 20:17:46.294420 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75664c7f-5fce-483a-941e-6252c6985990" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.294446 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="75664c7f-5fce-483a-941e-6252c6985990" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.294749 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="75664c7f-5fce-483a-941e-6252c6985990" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.294774 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03f8f55-0d58-4531-97be-0431d9d4fe87" containerName="registry-server" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.295538 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.301136 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.301783 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.301814 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.301824 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.301783 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.306157 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.314518 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj"] Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.314832 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.418012 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.418083 5007 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.418123 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk6hj\" (UniqueName: \"kubernetes.io/projected/392d5df3-8b9b-4dff-b447-b01b561e96bc-kube-api-access-zk6hj\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.418173 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.418359 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.418524 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.418739 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.418844 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.418960 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.419044 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.419192 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.521034 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.521735 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.521965 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk6hj\" (UniqueName: \"kubernetes.io/projected/392d5df3-8b9b-4dff-b447-b01b561e96bc-kube-api-access-zk6hj\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.522075 5007 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.522144 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.522218 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.522322 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.522393 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-combined-ca-bundle\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.528181 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.531355 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.522504 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.533445 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 
20:17:46.533562 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.537533 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.537980 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.540258 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.540748 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: 
\"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.541362 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.542999 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.550759 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk6hj\" (UniqueName: \"kubernetes.io/projected/392d5df3-8b9b-4dff-b447-b01b561e96bc-kube-api-access-zk6hj\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.561067 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.561717 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9hjqj\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:46 crc kubenswrapper[5007]: I0218 20:17:46.619299 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" Feb 18 20:17:47 crc kubenswrapper[5007]: I0218 20:17:47.139315 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj"] Feb 18 20:17:47 crc kubenswrapper[5007]: W0218 20:17:47.145493 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod392d5df3_8b9b_4dff_b447_b01b561e96bc.slice/crio-bc396c61d603a84a7f74cb4ee8932225c1b68dceb8c54748b5f3cbb39c034b8b WatchSource:0}: Error finding container bc396c61d603a84a7f74cb4ee8932225c1b68dceb8c54748b5f3cbb39c034b8b: Status 404 returned error can't find the container with id bc396c61d603a84a7f74cb4ee8932225c1b68dceb8c54748b5f3cbb39c034b8b Feb 18 20:17:47 crc kubenswrapper[5007]: I0218 20:17:47.180581 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" event={"ID":"392d5df3-8b9b-4dff-b447-b01b561e96bc","Type":"ContainerStarted","Data":"bc396c61d603a84a7f74cb4ee8932225c1b68dceb8c54748b5f3cbb39c034b8b"} Feb 18 20:17:47 crc kubenswrapper[5007]: I0218 20:17:47.909754 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:17:47 crc kubenswrapper[5007]: E0218 20:17:47.911207 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:17:48 crc kubenswrapper[5007]: I0218 20:17:48.193840 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" event={"ID":"392d5df3-8b9b-4dff-b447-b01b561e96bc","Type":"ContainerStarted","Data":"2ba66b98130fb2e911cd03b90ed9ee942fd3eeb588e75c8ed57d8f03f71084ac"} Feb 18 20:17:48 crc kubenswrapper[5007]: I0218 20:17:48.216054 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" podStartSLOduration=1.819668081 podStartE2EDuration="2.216028833s" podCreationTimestamp="2026-02-18 20:17:46 +0000 UTC" firstStartedPulling="2026-02-18 20:17:47.15765847 +0000 UTC m=+2351.939777994" lastFinishedPulling="2026-02-18 20:17:47.554019212 +0000 UTC m=+2352.336138746" observedRunningTime="2026-02-18 20:17:48.209231413 +0000 UTC m=+2352.991350957" watchObservedRunningTime="2026-02-18 20:17:48.216028833 +0000 UTC m=+2352.998148397" Feb 18 20:18:01 crc kubenswrapper[5007]: I0218 20:18:01.909917 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:18:01 crc kubenswrapper[5007]: E0218 20:18:01.911074 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:18:14 crc kubenswrapper[5007]: I0218 20:18:14.910577 5007 scope.go:117] 
"RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:18:14 crc kubenswrapper[5007]: E0218 20:18:14.911756 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:18:26 crc kubenswrapper[5007]: I0218 20:18:26.910417 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:18:26 crc kubenswrapper[5007]: E0218 20:18:26.911191 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:18:38 crc kubenswrapper[5007]: I0218 20:18:38.909997 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:18:38 crc kubenswrapper[5007]: E0218 20:18:38.911308 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:18:47 crc kubenswrapper[5007]: I0218 20:18:47.725835 
5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8prjh"] Feb 18 20:18:47 crc kubenswrapper[5007]: I0218 20:18:47.730254 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:47 crc kubenswrapper[5007]: I0218 20:18:47.751361 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8prjh"] Feb 18 20:18:47 crc kubenswrapper[5007]: I0218 20:18:47.875282 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4g72\" (UniqueName: \"kubernetes.io/projected/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-kube-api-access-d4g72\") pod \"certified-operators-8prjh\" (UID: \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\") " pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:47 crc kubenswrapper[5007]: I0218 20:18:47.875363 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-catalog-content\") pod \"certified-operators-8prjh\" (UID: \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\") " pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:47 crc kubenswrapper[5007]: I0218 20:18:47.875399 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-utilities\") pod \"certified-operators-8prjh\" (UID: \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\") " pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:47 crc kubenswrapper[5007]: I0218 20:18:47.976859 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4g72\" (UniqueName: \"kubernetes.io/projected/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-kube-api-access-d4g72\") pod 
\"certified-operators-8prjh\" (UID: \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\") " pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:47 crc kubenswrapper[5007]: I0218 20:18:47.977182 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-catalog-content\") pod \"certified-operators-8prjh\" (UID: \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\") " pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:47 crc kubenswrapper[5007]: I0218 20:18:47.977247 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-utilities\") pod \"certified-operators-8prjh\" (UID: \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\") " pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:47 crc kubenswrapper[5007]: I0218 20:18:47.977867 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-catalog-content\") pod \"certified-operators-8prjh\" (UID: \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\") " pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:47 crc kubenswrapper[5007]: I0218 20:18:47.977938 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-utilities\") pod \"certified-operators-8prjh\" (UID: \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\") " pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:47 crc kubenswrapper[5007]: I0218 20:18:47.996087 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4g72\" (UniqueName: \"kubernetes.io/projected/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-kube-api-access-d4g72\") pod \"certified-operators-8prjh\" (UID: 
\"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\") " pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:48 crc kubenswrapper[5007]: I0218 20:18:48.073998 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:48 crc kubenswrapper[5007]: I0218 20:18:48.638944 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8prjh"] Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.538706 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v7cbw"] Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.539800 5007 generic.go:334] "Generic (PLEG): container finished" podID="97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" containerID="48004f86f726963d285ab669718dcb166a766e238ed930c3388322067ae96004" exitCode=0 Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.542962 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8prjh" event={"ID":"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8","Type":"ContainerDied","Data":"48004f86f726963d285ab669718dcb166a766e238ed930c3388322067ae96004"} Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.542994 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8prjh" event={"ID":"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8","Type":"ContainerStarted","Data":"85e98ed4771b1c1aad7d154978f83569e48e0b5bc0df8031019ab5ffd1202c4a"} Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.543074 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.558095 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7cbw"] Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.715158 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c152906-ea2d-4de3-a7c2-8a943dac286c-utilities\") pod \"community-operators-v7cbw\" (UID: \"4c152906-ea2d-4de3-a7c2-8a943dac286c\") " pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.715584 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c152906-ea2d-4de3-a7c2-8a943dac286c-catalog-content\") pod \"community-operators-v7cbw\" (UID: \"4c152906-ea2d-4de3-a7c2-8a943dac286c\") " pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.715635 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crx86\" (UniqueName: \"kubernetes.io/projected/4c152906-ea2d-4de3-a7c2-8a943dac286c-kube-api-access-crx86\") pod \"community-operators-v7cbw\" (UID: \"4c152906-ea2d-4de3-a7c2-8a943dac286c\") " pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.817075 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c152906-ea2d-4de3-a7c2-8a943dac286c-utilities\") pod \"community-operators-v7cbw\" (UID: \"4c152906-ea2d-4de3-a7c2-8a943dac286c\") " pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.817183 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c152906-ea2d-4de3-a7c2-8a943dac286c-catalog-content\") pod \"community-operators-v7cbw\" (UID: \"4c152906-ea2d-4de3-a7c2-8a943dac286c\") " pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.817227 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crx86\" (UniqueName: \"kubernetes.io/projected/4c152906-ea2d-4de3-a7c2-8a943dac286c-kube-api-access-crx86\") pod \"community-operators-v7cbw\" (UID: \"4c152906-ea2d-4de3-a7c2-8a943dac286c\") " pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.817898 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c152906-ea2d-4de3-a7c2-8a943dac286c-utilities\") pod \"community-operators-v7cbw\" (UID: \"4c152906-ea2d-4de3-a7c2-8a943dac286c\") " pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.817985 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c152906-ea2d-4de3-a7c2-8a943dac286c-catalog-content\") pod \"community-operators-v7cbw\" (UID: \"4c152906-ea2d-4de3-a7c2-8a943dac286c\") " pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.838273 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crx86\" (UniqueName: \"kubernetes.io/projected/4c152906-ea2d-4de3-a7c2-8a943dac286c-kube-api-access-crx86\") pod \"community-operators-v7cbw\" (UID: \"4c152906-ea2d-4de3-a7c2-8a943dac286c\") " pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:49 crc kubenswrapper[5007]: I0218 20:18:49.890468 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:50 crc kubenswrapper[5007]: I0218 20:18:50.419676 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7cbw"] Feb 18 20:18:50 crc kubenswrapper[5007]: W0218 20:18:50.422839 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c152906_ea2d_4de3_a7c2_8a943dac286c.slice/crio-63ce8ba4ac8488668e5a5b293bdc8a27a495a586edfba1c101166338aa60a72e WatchSource:0}: Error finding container 63ce8ba4ac8488668e5a5b293bdc8a27a495a586edfba1c101166338aa60a72e: Status 404 returned error can't find the container with id 63ce8ba4ac8488668e5a5b293bdc8a27a495a586edfba1c101166338aa60a72e Feb 18 20:18:50 crc kubenswrapper[5007]: I0218 20:18:50.550406 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7cbw" event={"ID":"4c152906-ea2d-4de3-a7c2-8a943dac286c","Type":"ContainerStarted","Data":"63ce8ba4ac8488668e5a5b293bdc8a27a495a586edfba1c101166338aa60a72e"} Feb 18 20:18:50 crc kubenswrapper[5007]: I0218 20:18:50.552598 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8prjh" event={"ID":"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8","Type":"ContainerStarted","Data":"2a89914037bcda842e3a498682584480452775d826ef1a2f5d7c97fbeaadd953"} Feb 18 20:18:51 crc kubenswrapper[5007]: I0218 20:18:51.564568 5007 generic.go:334] "Generic (PLEG): container finished" podID="4c152906-ea2d-4de3-a7c2-8a943dac286c" containerID="03862f8384335b2d6b13ca18ea6b8e1ce58009c9ced11c8dd290a39138269eb8" exitCode=0 Feb 18 20:18:51 crc kubenswrapper[5007]: I0218 20:18:51.564638 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7cbw" 
event={"ID":"4c152906-ea2d-4de3-a7c2-8a943dac286c","Type":"ContainerDied","Data":"03862f8384335b2d6b13ca18ea6b8e1ce58009c9ced11c8dd290a39138269eb8"} Feb 18 20:18:51 crc kubenswrapper[5007]: I0218 20:18:51.568033 5007 generic.go:334] "Generic (PLEG): container finished" podID="97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" containerID="2a89914037bcda842e3a498682584480452775d826ef1a2f5d7c97fbeaadd953" exitCode=0 Feb 18 20:18:51 crc kubenswrapper[5007]: I0218 20:18:51.568069 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8prjh" event={"ID":"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8","Type":"ContainerDied","Data":"2a89914037bcda842e3a498682584480452775d826ef1a2f5d7c97fbeaadd953"} Feb 18 20:18:52 crc kubenswrapper[5007]: I0218 20:18:52.578697 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8prjh" event={"ID":"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8","Type":"ContainerStarted","Data":"8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5"} Feb 18 20:18:52 crc kubenswrapper[5007]: I0218 20:18:52.580749 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7cbw" event={"ID":"4c152906-ea2d-4de3-a7c2-8a943dac286c","Type":"ContainerStarted","Data":"b8c689210ab2e97252b9b1c15044fd8548f4dff4d5a38cb2501fac5d7297f13a"} Feb 18 20:18:52 crc kubenswrapper[5007]: I0218 20:18:52.608921 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8prjh" podStartSLOduration=3.1674190700000002 podStartE2EDuration="5.608900525s" podCreationTimestamp="2026-02-18 20:18:47 +0000 UTC" firstStartedPulling="2026-02-18 20:18:49.542504866 +0000 UTC m=+2414.324624440" lastFinishedPulling="2026-02-18 20:18:51.983986331 +0000 UTC m=+2416.766105895" observedRunningTime="2026-02-18 20:18:52.598975137 +0000 UTC m=+2417.381094681" watchObservedRunningTime="2026-02-18 20:18:52.608900525 +0000 UTC 
m=+2417.391020049" Feb 18 20:18:53 crc kubenswrapper[5007]: I0218 20:18:53.594930 5007 generic.go:334] "Generic (PLEG): container finished" podID="4c152906-ea2d-4de3-a7c2-8a943dac286c" containerID="b8c689210ab2e97252b9b1c15044fd8548f4dff4d5a38cb2501fac5d7297f13a" exitCode=0 Feb 18 20:18:53 crc kubenswrapper[5007]: I0218 20:18:53.595010 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7cbw" event={"ID":"4c152906-ea2d-4de3-a7c2-8a943dac286c","Type":"ContainerDied","Data":"b8c689210ab2e97252b9b1c15044fd8548f4dff4d5a38cb2501fac5d7297f13a"} Feb 18 20:18:53 crc kubenswrapper[5007]: I0218 20:18:53.909326 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:18:53 crc kubenswrapper[5007]: E0218 20:18:53.909605 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:18:54 crc kubenswrapper[5007]: I0218 20:18:54.605473 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7cbw" event={"ID":"4c152906-ea2d-4de3-a7c2-8a943dac286c","Type":"ContainerStarted","Data":"4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de"} Feb 18 20:18:54 crc kubenswrapper[5007]: I0218 20:18:54.623048 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v7cbw" podStartSLOduration=3.189993251 podStartE2EDuration="5.623034832s" podCreationTimestamp="2026-02-18 20:18:49 +0000 UTC" firstStartedPulling="2026-02-18 20:18:51.56763773 +0000 UTC m=+2416.349757264" 
lastFinishedPulling="2026-02-18 20:18:54.000679321 +0000 UTC m=+2418.782798845" observedRunningTime="2026-02-18 20:18:54.622119337 +0000 UTC m=+2419.404238911" watchObservedRunningTime="2026-02-18 20:18:54.623034832 +0000 UTC m=+2419.405154356" Feb 18 20:18:58 crc kubenswrapper[5007]: I0218 20:18:58.076058 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:58 crc kubenswrapper[5007]: I0218 20:18:58.076689 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:58 crc kubenswrapper[5007]: I0218 20:18:58.175806 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:58 crc kubenswrapper[5007]: I0218 20:18:58.715445 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:18:59 crc kubenswrapper[5007]: I0218 20:18:59.718840 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8prjh"] Feb 18 20:18:59 crc kubenswrapper[5007]: I0218 20:18:59.891504 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:59 crc kubenswrapper[5007]: I0218 20:18:59.891576 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:18:59 crc kubenswrapper[5007]: I0218 20:18:59.974888 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:19:00 crc kubenswrapper[5007]: I0218 20:19:00.677607 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8prjh" podUID="97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" 
containerName="registry-server" containerID="cri-o://8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5" gracePeriod=2 Feb 18 20:19:00 crc kubenswrapper[5007]: I0218 20:19:00.747443 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v7cbw" Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.160647 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8prjh" Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.282736 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-catalog-content\") pod \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\" (UID: \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\") " Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.282886 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4g72\" (UniqueName: \"kubernetes.io/projected/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-kube-api-access-d4g72\") pod \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\" (UID: \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\") " Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.282953 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-utilities\") pod \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\" (UID: \"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8\") " Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.284108 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-utilities" (OuterVolumeSpecName: "utilities") pod "97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" (UID: "97bbbe3f-3e41-4f76-9469-3d4e5277c0e8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.288552 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-kube-api-access-d4g72" (OuterVolumeSpecName: "kube-api-access-d4g72") pod "97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" (UID: "97bbbe3f-3e41-4f76-9469-3d4e5277c0e8"). InnerVolumeSpecName "kube-api-access-d4g72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.385360 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4g72\" (UniqueName: \"kubernetes.io/projected/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-kube-api-access-d4g72\") on node \"crc\" DevicePath \"\""
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.385408 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.689657 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" (UID: "97bbbe3f-3e41-4f76-9469-3d4e5277c0e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.693387 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.695801 5007 generic.go:334] "Generic (PLEG): container finished" podID="97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" containerID="8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5" exitCode=0
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.695862 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8prjh" event={"ID":"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8","Type":"ContainerDied","Data":"8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5"}
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.695954 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8prjh" event={"ID":"97bbbe3f-3e41-4f76-9469-3d4e5277c0e8","Type":"ContainerDied","Data":"85e98ed4771b1c1aad7d154978f83569e48e0b5bc0df8031019ab5ffd1202c4a"}
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.695985 5007 scope.go:117] "RemoveContainer" containerID="8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5"
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.695892 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8prjh"
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.725558 5007 scope.go:117] "RemoveContainer" containerID="2a89914037bcda842e3a498682584480452775d826ef1a2f5d7c97fbeaadd953"
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.744471 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8prjh"]
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.754447 5007 scope.go:117] "RemoveContainer" containerID="48004f86f726963d285ab669718dcb166a766e238ed930c3388322067ae96004"
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.763566 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8prjh"]
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.798575 5007 scope.go:117] "RemoveContainer" containerID="8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5"
Feb 18 20:19:01 crc kubenswrapper[5007]: E0218 20:19:01.799492 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5\": container with ID starting with 8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5 not found: ID does not exist" containerID="8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5"
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.799534 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5"} err="failed to get container status \"8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5\": rpc error: code = NotFound desc = could not find container \"8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5\": container with ID starting with 8c6f5ec680683d7e96ac4fe61eef9436556d36ce6f977052035313d0f47c98e5 not found: ID does not exist"
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.799563 5007 scope.go:117] "RemoveContainer" containerID="2a89914037bcda842e3a498682584480452775d826ef1a2f5d7c97fbeaadd953"
Feb 18 20:19:01 crc kubenswrapper[5007]: E0218 20:19:01.800038 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a89914037bcda842e3a498682584480452775d826ef1a2f5d7c97fbeaadd953\": container with ID starting with 2a89914037bcda842e3a498682584480452775d826ef1a2f5d7c97fbeaadd953 not found: ID does not exist" containerID="2a89914037bcda842e3a498682584480452775d826ef1a2f5d7c97fbeaadd953"
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.800071 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a89914037bcda842e3a498682584480452775d826ef1a2f5d7c97fbeaadd953"} err="failed to get container status \"2a89914037bcda842e3a498682584480452775d826ef1a2f5d7c97fbeaadd953\": rpc error: code = NotFound desc = could not find container \"2a89914037bcda842e3a498682584480452775d826ef1a2f5d7c97fbeaadd953\": container with ID starting with 2a89914037bcda842e3a498682584480452775d826ef1a2f5d7c97fbeaadd953 not found: ID does not exist"
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.800101 5007 scope.go:117] "RemoveContainer" containerID="48004f86f726963d285ab669718dcb166a766e238ed930c3388322067ae96004"
Feb 18 20:19:01 crc kubenswrapper[5007]: E0218 20:19:01.800400 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48004f86f726963d285ab669718dcb166a766e238ed930c3388322067ae96004\": container with ID starting with 48004f86f726963d285ab669718dcb166a766e238ed930c3388322067ae96004 not found: ID does not exist" containerID="48004f86f726963d285ab669718dcb166a766e238ed930c3388322067ae96004"
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.800446 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48004f86f726963d285ab669718dcb166a766e238ed930c3388322067ae96004"} err="failed to get container status \"48004f86f726963d285ab669718dcb166a766e238ed930c3388322067ae96004\": rpc error: code = NotFound desc = could not find container \"48004f86f726963d285ab669718dcb166a766e238ed930c3388322067ae96004\": container with ID starting with 48004f86f726963d285ab669718dcb166a766e238ed930c3388322067ae96004 not found: ID does not exist"
Feb 18 20:19:01 crc kubenswrapper[5007]: I0218 20:19:01.926912 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" path="/var/lib/kubelet/pods/97bbbe3f-3e41-4f76-9469-3d4e5277c0e8/volumes"
Feb 18 20:19:02 crc kubenswrapper[5007]: I0218 20:19:02.311546 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v7cbw"]
Feb 18 20:19:03 crc kubenswrapper[5007]: I0218 20:19:03.719645 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v7cbw" podUID="4c152906-ea2d-4de3-a7c2-8a943dac286c" containerName="registry-server" containerID="cri-o://4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de" gracePeriod=2
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.255185 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v7cbw"
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.371173 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c152906-ea2d-4de3-a7c2-8a943dac286c-catalog-content\") pod \"4c152906-ea2d-4de3-a7c2-8a943dac286c\" (UID: \"4c152906-ea2d-4de3-a7c2-8a943dac286c\") "
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.371352 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c152906-ea2d-4de3-a7c2-8a943dac286c-utilities\") pod \"4c152906-ea2d-4de3-a7c2-8a943dac286c\" (UID: \"4c152906-ea2d-4de3-a7c2-8a943dac286c\") "
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.371469 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crx86\" (UniqueName: \"kubernetes.io/projected/4c152906-ea2d-4de3-a7c2-8a943dac286c-kube-api-access-crx86\") pod \"4c152906-ea2d-4de3-a7c2-8a943dac286c\" (UID: \"4c152906-ea2d-4de3-a7c2-8a943dac286c\") "
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.372485 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c152906-ea2d-4de3-a7c2-8a943dac286c-utilities" (OuterVolumeSpecName: "utilities") pod "4c152906-ea2d-4de3-a7c2-8a943dac286c" (UID: "4c152906-ea2d-4de3-a7c2-8a943dac286c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.377921 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c152906-ea2d-4de3-a7c2-8a943dac286c-kube-api-access-crx86" (OuterVolumeSpecName: "kube-api-access-crx86") pod "4c152906-ea2d-4de3-a7c2-8a943dac286c" (UID: "4c152906-ea2d-4de3-a7c2-8a943dac286c"). InnerVolumeSpecName "kube-api-access-crx86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.426056 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c152906-ea2d-4de3-a7c2-8a943dac286c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c152906-ea2d-4de3-a7c2-8a943dac286c" (UID: "4c152906-ea2d-4de3-a7c2-8a943dac286c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.474482 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c152906-ea2d-4de3-a7c2-8a943dac286c-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.474534 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crx86\" (UniqueName: \"kubernetes.io/projected/4c152906-ea2d-4de3-a7c2-8a943dac286c-kube-api-access-crx86\") on node \"crc\" DevicePath \"\""
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.474554 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c152906-ea2d-4de3-a7c2-8a943dac286c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.731331 5007 generic.go:334] "Generic (PLEG): container finished" podID="4c152906-ea2d-4de3-a7c2-8a943dac286c" containerID="4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de" exitCode=0
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.731394 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7cbw" event={"ID":"4c152906-ea2d-4de3-a7c2-8a943dac286c","Type":"ContainerDied","Data":"4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de"}
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.731489 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7cbw" event={"ID":"4c152906-ea2d-4de3-a7c2-8a943dac286c","Type":"ContainerDied","Data":"63ce8ba4ac8488668e5a5b293bdc8a27a495a586edfba1c101166338aa60a72e"}
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.731411 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v7cbw"
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.731516 5007 scope.go:117] "RemoveContainer" containerID="4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de"
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.767963 5007 scope.go:117] "RemoveContainer" containerID="b8c689210ab2e97252b9b1c15044fd8548f4dff4d5a38cb2501fac5d7297f13a"
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.786075 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v7cbw"]
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.800505 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v7cbw"]
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.806738 5007 scope.go:117] "RemoveContainer" containerID="03862f8384335b2d6b13ca18ea6b8e1ce58009c9ced11c8dd290a39138269eb8"
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.851310 5007 scope.go:117] "RemoveContainer" containerID="4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de"
Feb 18 20:19:04 crc kubenswrapper[5007]: E0218 20:19:04.851962 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de\": container with ID starting with 4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de not found: ID does not exist" containerID="4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de"
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.851996 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de"} err="failed to get container status \"4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de\": rpc error: code = NotFound desc = could not find container \"4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de\": container with ID starting with 4f291cb6f64b616268666cea9d9fb94e744442bc9093050b4c7b25fa4d8936de not found: ID does not exist"
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.852018 5007 scope.go:117] "RemoveContainer" containerID="b8c689210ab2e97252b9b1c15044fd8548f4dff4d5a38cb2501fac5d7297f13a"
Feb 18 20:19:04 crc kubenswrapper[5007]: E0218 20:19:04.852440 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8c689210ab2e97252b9b1c15044fd8548f4dff4d5a38cb2501fac5d7297f13a\": container with ID starting with b8c689210ab2e97252b9b1c15044fd8548f4dff4d5a38cb2501fac5d7297f13a not found: ID does not exist" containerID="b8c689210ab2e97252b9b1c15044fd8548f4dff4d5a38cb2501fac5d7297f13a"
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.852463 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8c689210ab2e97252b9b1c15044fd8548f4dff4d5a38cb2501fac5d7297f13a"} err="failed to get container status \"b8c689210ab2e97252b9b1c15044fd8548f4dff4d5a38cb2501fac5d7297f13a\": rpc error: code = NotFound desc = could not find container \"b8c689210ab2e97252b9b1c15044fd8548f4dff4d5a38cb2501fac5d7297f13a\": container with ID starting with b8c689210ab2e97252b9b1c15044fd8548f4dff4d5a38cb2501fac5d7297f13a not found: ID does not exist"
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.852475 5007 scope.go:117] "RemoveContainer" containerID="03862f8384335b2d6b13ca18ea6b8e1ce58009c9ced11c8dd290a39138269eb8"
Feb 18 20:19:04 crc kubenswrapper[5007]: E0218 20:19:04.852687 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03862f8384335b2d6b13ca18ea6b8e1ce58009c9ced11c8dd290a39138269eb8\": container with ID starting with 03862f8384335b2d6b13ca18ea6b8e1ce58009c9ced11c8dd290a39138269eb8 not found: ID does not exist" containerID="03862f8384335b2d6b13ca18ea6b8e1ce58009c9ced11c8dd290a39138269eb8"
Feb 18 20:19:04 crc kubenswrapper[5007]: I0218 20:19:04.852709 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03862f8384335b2d6b13ca18ea6b8e1ce58009c9ced11c8dd290a39138269eb8"} err="failed to get container status \"03862f8384335b2d6b13ca18ea6b8e1ce58009c9ced11c8dd290a39138269eb8\": rpc error: code = NotFound desc = could not find container \"03862f8384335b2d6b13ca18ea6b8e1ce58009c9ced11c8dd290a39138269eb8\": container with ID starting with 03862f8384335b2d6b13ca18ea6b8e1ce58009c9ced11c8dd290a39138269eb8 not found: ID does not exist"
Feb 18 20:19:05 crc kubenswrapper[5007]: I0218 20:19:05.926340 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c152906-ea2d-4de3-a7c2-8a943dac286c" path="/var/lib/kubelet/pods/4c152906-ea2d-4de3-a7c2-8a943dac286c/volumes"
Feb 18 20:19:07 crc kubenswrapper[5007]: I0218 20:19:07.910500 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c"
Feb 18 20:19:07 crc kubenswrapper[5007]: E0218 20:19:07.912417 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564"
Feb 18 20:19:21 crc kubenswrapper[5007]: I0218 20:19:21.910018 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c"
Feb 18 20:19:21 crc kubenswrapper[5007]: E0218 20:19:21.911164 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564"
Feb 18 20:19:33 crc kubenswrapper[5007]: I0218 20:19:33.911597 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c"
Feb 18 20:19:33 crc kubenswrapper[5007]: E0218 20:19:33.912769 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564"
Feb 18 20:19:45 crc kubenswrapper[5007]: I0218 20:19:45.924901 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c"
Feb 18 20:19:45 crc kubenswrapper[5007]: E0218 20:19:45.926035 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564"
Feb 18 20:19:59 crc kubenswrapper[5007]: I0218 20:19:59.910125 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c"
Feb 18 20:19:59 crc kubenswrapper[5007]: E0218 20:19:59.911061 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564"
Feb 18 20:20:10 crc kubenswrapper[5007]: I0218 20:20:10.909240 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c"
Feb 18 20:20:10 crc kubenswrapper[5007]: E0218 20:20:10.909922 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564"
Feb 18 20:20:23 crc kubenswrapper[5007]: I0218 20:20:23.909653 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c"
Feb 18 20:20:23 crc kubenswrapper[5007]: E0218 20:20:23.910319 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564"
Feb 18 20:20:27 crc kubenswrapper[5007]: I0218 20:20:27.649400 5007 generic.go:334] "Generic (PLEG): container finished" podID="392d5df3-8b9b-4dff-b447-b01b561e96bc" containerID="2ba66b98130fb2e911cd03b90ed9ee942fd3eeb588e75c8ed57d8f03f71084ac" exitCode=0
Feb 18 20:20:27 crc kubenswrapper[5007]: I0218 20:20:27.649766 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" event={"ID":"392d5df3-8b9b-4dff-b447-b01b561e96bc","Type":"ContainerDied","Data":"2ba66b98130fb2e911cd03b90ed9ee942fd3eeb588e75c8ed57d8f03f71084ac"}
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.190068 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.263032 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-0\") pod \"392d5df3-8b9b-4dff-b447-b01b561e96bc\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") "
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.263087 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-ssh-key-openstack-edpm-ipam\") pod \"392d5df3-8b9b-4dff-b447-b01b561e96bc\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") "
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.263149 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-migration-ssh-key-1\") pod \"392d5df3-8b9b-4dff-b447-b01b561e96bc\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") "
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.263297 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-combined-ca-bundle\") pod \"392d5df3-8b9b-4dff-b447-b01b561e96bc\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") "
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.263396 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-inventory\") pod \"392d5df3-8b9b-4dff-b447-b01b561e96bc\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") "
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.263454 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-2\") pod \"392d5df3-8b9b-4dff-b447-b01b561e96bc\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") "
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.263483 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-migration-ssh-key-0\") pod \"392d5df3-8b9b-4dff-b447-b01b561e96bc\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") "
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.263528 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-3\") pod \"392d5df3-8b9b-4dff-b447-b01b561e96bc\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") "
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.263555 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk6hj\" (UniqueName: \"kubernetes.io/projected/392d5df3-8b9b-4dff-b447-b01b561e96bc-kube-api-access-zk6hj\") pod \"392d5df3-8b9b-4dff-b447-b01b561e96bc\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") "
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.263577 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-1\") pod \"392d5df3-8b9b-4dff-b447-b01b561e96bc\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") "
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.263604 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-extra-config-0\") pod \"392d5df3-8b9b-4dff-b447-b01b561e96bc\" (UID: \"392d5df3-8b9b-4dff-b447-b01b561e96bc\") "
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.270148 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "392d5df3-8b9b-4dff-b447-b01b561e96bc" (UID: "392d5df3-8b9b-4dff-b447-b01b561e96bc"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.270504 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392d5df3-8b9b-4dff-b447-b01b561e96bc-kube-api-access-zk6hj" (OuterVolumeSpecName: "kube-api-access-zk6hj") pod "392d5df3-8b9b-4dff-b447-b01b561e96bc" (UID: "392d5df3-8b9b-4dff-b447-b01b561e96bc"). InnerVolumeSpecName "kube-api-access-zk6hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.308817 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-inventory" (OuterVolumeSpecName: "inventory") pod "392d5df3-8b9b-4dff-b447-b01b561e96bc" (UID: "392d5df3-8b9b-4dff-b447-b01b561e96bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.309581 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "392d5df3-8b9b-4dff-b447-b01b561e96bc" (UID: "392d5df3-8b9b-4dff-b447-b01b561e96bc"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.320705 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "392d5df3-8b9b-4dff-b447-b01b561e96bc" (UID: "392d5df3-8b9b-4dff-b447-b01b561e96bc"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.324519 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "392d5df3-8b9b-4dff-b447-b01b561e96bc" (UID: "392d5df3-8b9b-4dff-b447-b01b561e96bc"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.332045 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "392d5df3-8b9b-4dff-b447-b01b561e96bc" (UID: "392d5df3-8b9b-4dff-b447-b01b561e96bc"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.334615 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "392d5df3-8b9b-4dff-b447-b01b561e96bc" (UID: "392d5df3-8b9b-4dff-b447-b01b561e96bc"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.337012 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "392d5df3-8b9b-4dff-b447-b01b561e96bc" (UID: "392d5df3-8b9b-4dff-b447-b01b561e96bc"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.347265 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "392d5df3-8b9b-4dff-b447-b01b561e96bc" (UID: "392d5df3-8b9b-4dff-b447-b01b561e96bc"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.357668 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "392d5df3-8b9b-4dff-b447-b01b561e96bc" (UID: "392d5df3-8b9b-4dff-b447-b01b561e96bc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.366361 5007 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.366421 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.366450 5007 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.366461 5007 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.366475 5007 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.366486 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk6hj\" (UniqueName: \"kubernetes.io/projected/392d5df3-8b9b-4dff-b447-b01b561e96bc-kube-api-access-zk6hj\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.366496 5007 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.366511 5007 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.366524 5007 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.366538 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.366549 5007 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/392d5df3-8b9b-4dff-b447-b01b561e96bc-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.672678 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj" event={"ID":"392d5df3-8b9b-4dff-b447-b01b561e96bc","Type":"ContainerDied","Data":"bc396c61d603a84a7f74cb4ee8932225c1b68dceb8c54748b5f3cbb39c034b8b"}
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.673040 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc396c61d603a84a7f74cb4ee8932225c1b68dceb8c54748b5f3cbb39c034b8b"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.672788 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9hjqj"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.817719 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f"]
Feb 18 20:20:29 crc kubenswrapper[5007]: E0218 20:20:29.818254 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" containerName="extract-utilities"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.818286 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" containerName="extract-utilities"
Feb 18 20:20:29 crc kubenswrapper[5007]: E0218 20:20:29.818318 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c152906-ea2d-4de3-a7c2-8a943dac286c" containerName="extract-utilities"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.818330 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c152906-ea2d-4de3-a7c2-8a943dac286c" containerName="extract-utilities"
Feb 18 20:20:29 crc kubenswrapper[5007]: E0218 20:20:29.818342 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" containerName="registry-server"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.818355 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" containerName="registry-server"
Feb 18 20:20:29 crc kubenswrapper[5007]: E0218 20:20:29.818382 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c152906-ea2d-4de3-a7c2-8a943dac286c" containerName="extract-content"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.818392 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c152906-ea2d-4de3-a7c2-8a943dac286c" containerName="extract-content"
Feb 18 20:20:29 crc kubenswrapper[5007]: E0218 20:20:29.818414 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" containerName="extract-content"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.818423 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" containerName="extract-content"
Feb 18 20:20:29 crc kubenswrapper[5007]: E0218 20:20:29.818480 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392d5df3-8b9b-4dff-b447-b01b561e96bc" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.818492 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="392d5df3-8b9b-4dff-b447-b01b561e96bc" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:20:29 crc kubenswrapper[5007]: E0218 20:20:29.818511 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c152906-ea2d-4de3-a7c2-8a943dac286c" containerName="registry-server"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.818521 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c152906-ea2d-4de3-a7c2-8a943dac286c" containerName="registry-server"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.818807 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="97bbbe3f-3e41-4f76-9469-3d4e5277c0e8" containerName="registry-server"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.818832 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="392d5df3-8b9b-4dff-b447-b01b561e96bc" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.818857 5007 memory_manager.go:354]
"RemoveStaleState removing state" podUID="4c152906-ea2d-4de3-a7c2-8a943dac286c" containerName="registry-server" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.819919 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.826020 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.826239 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.826567 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.826687 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.826808 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r599f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.831576 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f"] Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.877324 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.877509 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.877614 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.877648 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.877815 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z2x6\" (UniqueName: \"kubernetes.io/projected/00a0d610-ff3f-4a53-9edc-526b8e35385e-kube-api-access-7z2x6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.878185 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.878255 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.980176 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.980217 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.980299 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z2x6\" (UniqueName: \"kubernetes.io/projected/00a0d610-ff3f-4a53-9edc-526b8e35385e-kube-api-access-7z2x6\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.981935 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.982558 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.982826 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.983042 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 
crc kubenswrapper[5007]: I0218 20:20:29.985702 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.985935 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.987115 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.987514 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.987801 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:29 crc kubenswrapper[5007]: I0218 20:20:29.987996 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:30 crc kubenswrapper[5007]: I0218 20:20:30.009293 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z2x6\" (UniqueName: \"kubernetes.io/projected/00a0d610-ff3f-4a53-9edc-526b8e35385e-kube-api-access-7z2x6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:30 crc kubenswrapper[5007]: I0218 20:20:30.150258 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:20:30 crc kubenswrapper[5007]: I0218 20:20:30.801580 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f"] Feb 18 20:20:31 crc kubenswrapper[5007]: I0218 20:20:31.704780 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" event={"ID":"00a0d610-ff3f-4a53-9edc-526b8e35385e","Type":"ContainerStarted","Data":"e6ef1f3604112ea325f872db5436f8ec45b7ba84f217d2a4886ae98d32e5df8d"} Feb 18 20:20:31 crc kubenswrapper[5007]: I0218 20:20:31.705348 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" event={"ID":"00a0d610-ff3f-4a53-9edc-526b8e35385e","Type":"ContainerStarted","Data":"7129c22cd0024fb752f16950d7849f31a15f847c14c033668313f163d613d87e"} Feb 18 20:20:31 crc kubenswrapper[5007]: I0218 20:20:31.726523 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" podStartSLOduration=2.181809851 podStartE2EDuration="2.726501721s" podCreationTimestamp="2026-02-18 20:20:29 +0000 UTC" firstStartedPulling="2026-02-18 20:20:30.812257559 +0000 UTC m=+2515.594377093" lastFinishedPulling="2026-02-18 20:20:31.356949419 +0000 UTC m=+2516.139068963" observedRunningTime="2026-02-18 20:20:31.723663792 +0000 UTC m=+2516.505783326" watchObservedRunningTime="2026-02-18 20:20:31.726501721 +0000 UTC m=+2516.508621255" Feb 18 20:20:38 crc kubenswrapper[5007]: I0218 20:20:38.910732 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:20:38 crc kubenswrapper[5007]: E0218 20:20:38.911517 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:20:52 crc kubenswrapper[5007]: I0218 20:20:52.910541 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:20:52 crc kubenswrapper[5007]: E0218 20:20:52.911662 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:21:06 crc kubenswrapper[5007]: I0218 20:21:06.911495 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:21:06 crc kubenswrapper[5007]: E0218 20:21:06.912497 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:21:18 crc kubenswrapper[5007]: I0218 20:21:18.910978 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:21:18 crc kubenswrapper[5007]: E0218 20:21:18.912218 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:21:33 crc kubenswrapper[5007]: I0218 20:21:33.911193 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:21:33 crc kubenswrapper[5007]: E0218 20:21:33.912673 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:21:47 crc kubenswrapper[5007]: I0218 20:21:47.909406 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:21:48 crc kubenswrapper[5007]: I0218 20:21:48.502032 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"ade2056cc529ad39f6325110d36bcbcbf391b085c57c1893fdaf86eed673e5be"} Feb 18 20:22:37 crc kubenswrapper[5007]: I0218 20:22:37.006855 5007 generic.go:334] "Generic (PLEG): container finished" podID="00a0d610-ff3f-4a53-9edc-526b8e35385e" containerID="e6ef1f3604112ea325f872db5436f8ec45b7ba84f217d2a4886ae98d32e5df8d" exitCode=0 Feb 18 20:22:37 crc kubenswrapper[5007]: I0218 20:22:37.006972 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" 
event={"ID":"00a0d610-ff3f-4a53-9edc-526b8e35385e","Type":"ContainerDied","Data":"e6ef1f3604112ea325f872db5436f8ec45b7ba84f217d2a4886ae98d32e5df8d"} Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.507396 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.647261 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-telemetry-combined-ca-bundle\") pod \"00a0d610-ff3f-4a53-9edc-526b8e35385e\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.647317 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ssh-key-openstack-edpm-ipam\") pod \"00a0d610-ff3f-4a53-9edc-526b8e35385e\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.647392 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z2x6\" (UniqueName: \"kubernetes.io/projected/00a0d610-ff3f-4a53-9edc-526b8e35385e-kube-api-access-7z2x6\") pod \"00a0d610-ff3f-4a53-9edc-526b8e35385e\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.647444 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-0\") pod \"00a0d610-ff3f-4a53-9edc-526b8e35385e\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.647492 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-1\") pod \"00a0d610-ff3f-4a53-9edc-526b8e35385e\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.647540 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-2\") pod \"00a0d610-ff3f-4a53-9edc-526b8e35385e\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.648336 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-inventory\") pod \"00a0d610-ff3f-4a53-9edc-526b8e35385e\" (UID: \"00a0d610-ff3f-4a53-9edc-526b8e35385e\") " Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.654804 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "00a0d610-ff3f-4a53-9edc-526b8e35385e" (UID: "00a0d610-ff3f-4a53-9edc-526b8e35385e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.655237 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00a0d610-ff3f-4a53-9edc-526b8e35385e-kube-api-access-7z2x6" (OuterVolumeSpecName: "kube-api-access-7z2x6") pod "00a0d610-ff3f-4a53-9edc-526b8e35385e" (UID: "00a0d610-ff3f-4a53-9edc-526b8e35385e"). InnerVolumeSpecName "kube-api-access-7z2x6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.677804 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "00a0d610-ff3f-4a53-9edc-526b8e35385e" (UID: "00a0d610-ff3f-4a53-9edc-526b8e35385e"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.678277 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "00a0d610-ff3f-4a53-9edc-526b8e35385e" (UID: "00a0d610-ff3f-4a53-9edc-526b8e35385e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.683158 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "00a0d610-ff3f-4a53-9edc-526b8e35385e" (UID: "00a0d610-ff3f-4a53-9edc-526b8e35385e"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.684726 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-inventory" (OuterVolumeSpecName: "inventory") pod "00a0d610-ff3f-4a53-9edc-526b8e35385e" (UID: "00a0d610-ff3f-4a53-9edc-526b8e35385e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.703330 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "00a0d610-ff3f-4a53-9edc-526b8e35385e" (UID: "00a0d610-ff3f-4a53-9edc-526b8e35385e"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.751889 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z2x6\" (UniqueName: \"kubernetes.io/projected/00a0d610-ff3f-4a53-9edc-526b8e35385e-kube-api-access-7z2x6\") on node \"crc\" DevicePath \"\"" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.751946 5007 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.751990 5007 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.752014 5007 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.752036 5007 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:22:38 
crc kubenswrapper[5007]: I0218 20:22:38.752055 5007 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:22:38 crc kubenswrapper[5007]: I0218 20:22:38.752073 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00a0d610-ff3f-4a53-9edc-526b8e35385e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:22:39 crc kubenswrapper[5007]: I0218 20:22:39.028726 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" event={"ID":"00a0d610-ff3f-4a53-9edc-526b8e35385e","Type":"ContainerDied","Data":"7129c22cd0024fb752f16950d7849f31a15f847c14c033668313f163d613d87e"} Feb 18 20:22:39 crc kubenswrapper[5007]: I0218 20:22:39.028817 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7129c22cd0024fb752f16950d7849f31a15f847c14c033668313f163d613d87e" Feb 18 20:22:39 crc kubenswrapper[5007]: I0218 20:22:39.028754 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rcn5f" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.051742 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 18 20:23:15 crc kubenswrapper[5007]: E0218 20:23:15.053810 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00a0d610-ff3f-4a53-9edc-526b8e35385e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.053831 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="00a0d610-ff3f-4a53-9edc-526b8e35385e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.054095 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="00a0d610-ff3f-4a53-9edc-526b8e35385e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.055406 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.060259 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.076126 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.097693 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24f7aa1-a329-46bf-8727-da5b751e71b4-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.097761 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb5tg\" (UniqueName: \"kubernetes.io/projected/c24f7aa1-a329-46bf-8727-da5b751e71b4-kube-api-access-rb5tg\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.097936 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.098046 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-dev\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.098084 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.098147 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.098167 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-run\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.098214 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.098281 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.098350 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c24f7aa1-a329-46bf-8727-da5b751e71b4-config-data-custom\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.098385 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-lib-modules\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.098469 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24f7aa1-a329-46bf-8727-da5b751e71b4-config-data\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.098506 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.098535 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c24f7aa1-a329-46bf-8727-da5b751e71b4-scripts\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.098631 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-sys\") pod \"cinder-backup-0\" (UID: 
\"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.185018 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.186815 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.190666 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200006 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200058 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-run\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200099 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200149 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" 
Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200197 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c24f7aa1-a329-46bf-8727-da5b751e71b4-config-data-custom\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200207 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-run\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200228 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-lib-modules\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200259 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-lib-modules\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200289 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200366 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c24f7aa1-a329-46bf-8727-da5b751e71b4-config-data\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200404 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200453 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c24f7aa1-a329-46bf-8727-da5b751e71b4-scripts\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200535 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-sys\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200597 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24f7aa1-a329-46bf-8727-da5b751e71b4-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200603 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: 
I0218 20:23:15.200632 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb5tg\" (UniqueName: \"kubernetes.io/projected/c24f7aa1-a329-46bf-8727-da5b751e71b4-kube-api-access-rb5tg\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200643 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-sys\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200669 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200703 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-dev\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200725 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200728 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-etc-iscsi\") pod \"cinder-backup-0\" (UID: 
\"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200808 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-dev\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.200968 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-etc-nvme\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.201644 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.201736 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c24f7aa1-a329-46bf-8727-da5b751e71b4-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.206603 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.226655 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c24f7aa1-a329-46bf-8727-da5b751e71b4-scripts\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc 
kubenswrapper[5007]: I0218 20:23:15.226830 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24f7aa1-a329-46bf-8727-da5b751e71b4-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.227046 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c24f7aa1-a329-46bf-8727-da5b751e71b4-config-data-custom\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.228227 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24f7aa1-a329-46bf-8727-da5b751e71b4-config-data\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.244467 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb5tg\" (UniqueName: \"kubernetes.io/projected/c24f7aa1-a329-46bf-8727-da5b751e71b4-kube-api-access-rb5tg\") pod \"cinder-backup-0\" (UID: \"c24f7aa1-a329-46bf-8727-da5b751e71b4\") " pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.250482 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.252588 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.257185 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.294767 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302319 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302370 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302395 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302490 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302531 
5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302550 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302575 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-sys\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302598 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aecc59-06d4-4479-a370-8cba2af7867b-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302638 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302658 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302680 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302702 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeef0501-abfc-4d81-8799-59aeea395a9d-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302726 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeef0501-abfc-4d81-8799-59aeea395a9d-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302762 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302788 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302813 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8plwm\" (UniqueName: \"kubernetes.io/projected/04aecc59-06d4-4479-a370-8cba2af7867b-kube-api-access-8plwm\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302846 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302871 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04aecc59-06d4-4479-a370-8cba2af7867b-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302903 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eeef0501-abfc-4d81-8799-59aeea395a9d-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302921 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/04aecc59-06d4-4479-a370-8cba2af7867b-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302953 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.302997 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqkrf\" (UniqueName: \"kubernetes.io/projected/eeef0501-abfc-4d81-8799-59aeea395a9d-kube-api-access-sqkrf\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.303058 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.303080 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aecc59-06d4-4479-a370-8cba2af7867b-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.303100 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-dev\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.303129 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.303151 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeef0501-abfc-4d81-8799-59aeea395a9d-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.303177 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.303202 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-run\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.303221 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-etc-nvme\") pod 
\"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.375516 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405156 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405205 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aecc59-06d4-4479-a370-8cba2af7867b-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405233 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-dev\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405269 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405274 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-run\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405293 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeef0501-abfc-4d81-8799-59aeea395a9d-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405412 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405475 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-run\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405500 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405565 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405584 5007 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405604 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405677 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405739 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405760 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405798 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-sys\") pod \"cinder-volume-nfs-0\" 
(UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405819 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aecc59-06d4-4479-a370-8cba2af7867b-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405879 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405899 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405914 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.405952 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeef0501-abfc-4d81-8799-59aeea395a9d-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: 
I0218 20:23:15.405983 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeef0501-abfc-4d81-8799-59aeea395a9d-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406031 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406062 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406092 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8plwm\" (UniqueName: \"kubernetes.io/projected/04aecc59-06d4-4479-a370-8cba2af7867b-kube-api-access-8plwm\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406138 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406165 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/04aecc59-06d4-4479-a370-8cba2af7867b-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406203 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04aecc59-06d4-4479-a370-8cba2af7867b-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406219 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eeef0501-abfc-4d81-8799-59aeea395a9d-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406268 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406331 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqkrf\" (UniqueName: \"kubernetes.io/projected/eeef0501-abfc-4d81-8799-59aeea395a9d-kube-api-access-sqkrf\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406739 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " 
pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406797 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406851 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406860 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-dev\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406902 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-run\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406959 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406995 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.407028 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.407060 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.407133 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.407182 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.407215 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 
20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.409752 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.409823 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-sys\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.409857 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.406806 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.409886 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.409954 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/eeef0501-abfc-4d81-8799-59aeea395a9d-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.409987 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/04aecc59-06d4-4479-a370-8cba2af7867b-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.410648 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeef0501-abfc-4d81-8799-59aeea395a9d-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.411363 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04aecc59-06d4-4479-a370-8cba2af7867b-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.412614 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeef0501-abfc-4d81-8799-59aeea395a9d-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.413278 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeef0501-abfc-4d81-8799-59aeea395a9d-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 
crc kubenswrapper[5007]: I0218 20:23:15.413763 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aecc59-06d4-4479-a370-8cba2af7867b-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.414398 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aecc59-06d4-4479-a370-8cba2af7867b-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.414957 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04aecc59-06d4-4479-a370-8cba2af7867b-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.429588 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eeef0501-abfc-4d81-8799-59aeea395a9d-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.430067 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8plwm\" (UniqueName: \"kubernetes.io/projected/04aecc59-06d4-4479-a370-8cba2af7867b-kube-api-access-8plwm\") pod \"cinder-volume-nfs-2-0\" (UID: \"04aecc59-06d4-4479-a370-8cba2af7867b\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.441337 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqkrf\" 
(UniqueName: \"kubernetes.io/projected/eeef0501-abfc-4d81-8799-59aeea395a9d-kube-api-access-sqkrf\") pod \"cinder-volume-nfs-0\" (UID: \"eeef0501-abfc-4d81-8799-59aeea395a9d\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.530515 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.620074 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.932461 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:23:15 crc kubenswrapper[5007]: I0218 20:23:15.939131 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 18 20:23:16 crc kubenswrapper[5007]: I0218 20:23:16.280084 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 18 20:23:16 crc kubenswrapper[5007]: I0218 20:23:16.476017 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 18 20:23:16 crc kubenswrapper[5007]: W0218 20:23:16.509482 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeef0501_abfc_4d81_8799_59aeea395a9d.slice/crio-f28feebbaf1656098d8ac6844c073d9c2226e37664143be7c3ade27acf2e5f8e WatchSource:0}: Error finding container f28feebbaf1656098d8ac6844c073d9c2226e37664143be7c3ade27acf2e5f8e: Status 404 returned error can't find the container with id f28feebbaf1656098d8ac6844c073d9c2226e37664143be7c3ade27acf2e5f8e Feb 18 20:23:16 crc kubenswrapper[5007]: I0218 20:23:16.525450 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" 
event={"ID":"04aecc59-06d4-4479-a370-8cba2af7867b","Type":"ContainerStarted","Data":"ebf02ea37efd7db7e5d2b386e7210724c4c14338373e3dd7021200a79d92f652"} Feb 18 20:23:16 crc kubenswrapper[5007]: I0218 20:23:16.538369 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"c24f7aa1-a329-46bf-8727-da5b751e71b4","Type":"ContainerStarted","Data":"93f18a1916ac542f4ca4f1cbf6812973611eeffeebd58cd555371fb95832a102"} Feb 18 20:23:16 crc kubenswrapper[5007]: I0218 20:23:16.538454 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"c24f7aa1-a329-46bf-8727-da5b751e71b4","Type":"ContainerStarted","Data":"1ea3fb874a0f40ada89260b80006f6e804cfa7e9c3fa74efe96fad6fc944463f"} Feb 18 20:23:17 crc kubenswrapper[5007]: I0218 20:23:17.556310 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"eeef0501-abfc-4d81-8799-59aeea395a9d","Type":"ContainerStarted","Data":"c0deee9170839dca2df9798e8d577d43c2574485fe48453541fb97ac60ce9c96"} Feb 18 20:23:17 crc kubenswrapper[5007]: I0218 20:23:17.556931 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"eeef0501-abfc-4d81-8799-59aeea395a9d","Type":"ContainerStarted","Data":"4f62fdf3d84b377a6970439ad9874601899989bd0940095b9c762b8b0360b9e1"} Feb 18 20:23:17 crc kubenswrapper[5007]: I0218 20:23:17.556947 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"eeef0501-abfc-4d81-8799-59aeea395a9d","Type":"ContainerStarted","Data":"f28feebbaf1656098d8ac6844c073d9c2226e37664143be7c3ade27acf2e5f8e"} Feb 18 20:23:17 crc kubenswrapper[5007]: I0218 20:23:17.560546 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"04aecc59-06d4-4479-a370-8cba2af7867b","Type":"ContainerStarted","Data":"d097b0c89c754b1527016e517362085647bb44267cc2ceeacc07f46551ceda3a"} Feb 18 20:23:17 
crc kubenswrapper[5007]: I0218 20:23:17.560582 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"04aecc59-06d4-4479-a370-8cba2af7867b","Type":"ContainerStarted","Data":"706ab310e6b1a500c0797651c5f233d2efefc1b2ca590c72ba2eb0da4b916cb9"} Feb 18 20:23:17 crc kubenswrapper[5007]: I0218 20:23:17.564137 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"c24f7aa1-a329-46bf-8727-da5b751e71b4","Type":"ContainerStarted","Data":"42fb9868f75be314e6383dd181db40c6e321ef8767d51ffb661d5adb3d997347"} Feb 18 20:23:17 crc kubenswrapper[5007]: I0218 20:23:17.591623 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.591599237 podStartE2EDuration="2.591599237s" podCreationTimestamp="2026-02-18 20:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:23:17.579697111 +0000 UTC m=+2682.361816655" watchObservedRunningTime="2026-02-18 20:23:17.591599237 +0000 UTC m=+2682.373718781" Feb 18 20:23:17 crc kubenswrapper[5007]: I0218 20:23:17.602265 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.447229277 podStartE2EDuration="2.602246108s" podCreationTimestamp="2026-02-18 20:23:15 +0000 UTC" firstStartedPulling="2026-02-18 20:23:15.932262182 +0000 UTC m=+2680.714381706" lastFinishedPulling="2026-02-18 20:23:16.087279013 +0000 UTC m=+2680.869398537" observedRunningTime="2026-02-18 20:23:17.600351885 +0000 UTC m=+2682.382471409" watchObservedRunningTime="2026-02-18 20:23:17.602246108 +0000 UTC m=+2682.384365632" Feb 18 20:23:20 crc kubenswrapper[5007]: I0218 20:23:20.375590 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 18 20:23:20 crc kubenswrapper[5007]: I0218 
20:23:20.530551 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:20 crc kubenswrapper[5007]: I0218 20:23:20.620318 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:23:25 crc kubenswrapper[5007]: I0218 20:23:25.536255 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 18 20:23:25 crc kubenswrapper[5007]: I0218 20:23:25.558941 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=10.312687591 podStartE2EDuration="10.55892419s" podCreationTimestamp="2026-02-18 20:23:15 +0000 UTC" firstStartedPulling="2026-02-18 20:23:16.291636158 +0000 UTC m=+2681.073755692" lastFinishedPulling="2026-02-18 20:23:16.537872767 +0000 UTC m=+2681.319992291" observedRunningTime="2026-02-18 20:23:17.638314618 +0000 UTC m=+2682.420434172" watchObservedRunningTime="2026-02-18 20:23:25.55892419 +0000 UTC m=+2690.341043714" Feb 18 20:23:25 crc kubenswrapper[5007]: I0218 20:23:25.694341 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Feb 18 20:23:25 crc kubenswrapper[5007]: I0218 20:23:25.903040 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:24:15 crc kubenswrapper[5007]: I0218 20:24:15.663517 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:24:15 crc kubenswrapper[5007]: I0218 20:24:15.664251 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" 
podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:24:23 crc kubenswrapper[5007]: I0218 20:24:23.407296 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:24:23 crc kubenswrapper[5007]: I0218 20:24:23.408146 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="prometheus" containerID="cri-o://8cd3df79fcd3bf1da6f2924afe014d1f12b702b4586117d889dc7aab808e10b0" gracePeriod=600 Feb 18 20:24:23 crc kubenswrapper[5007]: I0218 20:24:23.408238 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="thanos-sidecar" containerID="cri-o://fec0b89c24b9a631e73a6a5012ca4d7b7bb83658c59b1c99f48123b4c08244e3" gracePeriod=600 Feb 18 20:24:23 crc kubenswrapper[5007]: I0218 20:24:23.408291 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="config-reloader" containerID="cri-o://6899d09b427bdcc8aa8c975b2b37d0e9e34eac54fa330a7359811c8678982450" gracePeriod=600 Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.352674 5007 generic.go:334] "Generic (PLEG): container finished" podID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerID="fec0b89c24b9a631e73a6a5012ca4d7b7bb83658c59b1c99f48123b4c08244e3" exitCode=0 Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.353032 5007 generic.go:334] "Generic (PLEG): container finished" podID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerID="6899d09b427bdcc8aa8c975b2b37d0e9e34eac54fa330a7359811c8678982450" exitCode=0 Feb 18 20:24:24 crc 
kubenswrapper[5007]: I0218 20:24:24.353051 5007 generic.go:334] "Generic (PLEG): container finished" podID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerID="8cd3df79fcd3bf1da6f2924afe014d1f12b702b4586117d889dc7aab808e10b0" exitCode=0 Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.353084 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad6bbbdf-51d7-4968-9e80-eed841e76e5c","Type":"ContainerDied","Data":"fec0b89c24b9a631e73a6a5012ca4d7b7bb83658c59b1c99f48123b4c08244e3"} Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.353124 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad6bbbdf-51d7-4968-9e80-eed841e76e5c","Type":"ContainerDied","Data":"6899d09b427bdcc8aa8c975b2b37d0e9e34eac54fa330a7359811c8678982450"} Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.353143 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad6bbbdf-51d7-4968-9e80-eed841e76e5c","Type":"ContainerDied","Data":"8cd3df79fcd3bf1da6f2924afe014d1f12b702b4586117d889dc7aab808e10b0"} Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.569048 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.691440 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-config-out\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.691549 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.691624 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-2\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.691684 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.691750 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-0\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: 
\"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.691807 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-config\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.691845 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-1\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.691873 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-thanos-prometheus-http-client-file\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.691904 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqhgp\" (UniqueName: \"kubernetes.io/projected/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-kube-api-access-qqhgp\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.691938 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-tls-assets\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.691962 5007 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-secret-combined-ca-bundle\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.692158 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.692205 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\" (UID: \"ad6bbbdf-51d7-4968-9e80-eed841e76e5c\") " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.693130 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.693302 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.693773 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.699185 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.699224 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.699936 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.700363 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-config" (OuterVolumeSpecName: "config") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.700439 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-kube-api-access-qqhgp" (OuterVolumeSpecName: "kube-api-access-qqhgp") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). InnerVolumeSpecName "kube-api-access-qqhgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.702303 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.707632 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-config-out" (OuterVolumeSpecName: "config-out") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.708083 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.729716 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). 
InnerVolumeSpecName "pvc-6186620d-f34e-4325-8bee-d656de7291fd". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.788645 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config" (OuterVolumeSpecName: "web-config") pod "ad6bbbdf-51d7-4968-9e80-eed841e76e5c" (UID: "ad6bbbdf-51d7-4968-9e80-eed841e76e5c"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795348 5007 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795382 5007 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795400 5007 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795417 5007 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795433 5007 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795465 5007 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795481 5007 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795498 5007 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795515 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqhgp\" (UniqueName: \"kubernetes.io/projected/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-kube-api-access-qqhgp\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795528 5007 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795596 5007 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795635 5007 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") on node \"crc\" " Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.795652 5007 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ad6bbbdf-51d7-4968-9e80-eed841e76e5c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.827496 5007 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.828485 5007 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6186620d-f34e-4325-8bee-d656de7291fd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd") on node "crc" Feb 18 20:24:24 crc kubenswrapper[5007]: I0218 20:24:24.897950 5007 reconciler_common.go:293] "Volume detached for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.371305 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ad6bbbdf-51d7-4968-9e80-eed841e76e5c","Type":"ContainerDied","Data":"6c78e36d92a09bb404e41fccda35e7f0b09a1c6166fc74ac719f06ab209f83ee"} Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.371676 5007 scope.go:117] "RemoveContainer" containerID="fec0b89c24b9a631e73a6a5012ca4d7b7bb83658c59b1c99f48123b4c08244e3" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.371528 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.404738 5007 scope.go:117] "RemoveContainer" containerID="6899d09b427bdcc8aa8c975b2b37d0e9e34eac54fa330a7359811c8678982450" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.440646 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.451951 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.459749 5007 scope.go:117] "RemoveContainer" containerID="8cd3df79fcd3bf1da6f2924afe014d1f12b702b4586117d889dc7aab808e10b0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.479114 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:24:25 crc kubenswrapper[5007]: E0218 20:24:25.479596 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="prometheus" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.479617 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="prometheus" Feb 18 20:24:25 crc kubenswrapper[5007]: E0218 20:24:25.479638 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="thanos-sidecar" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.479648 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="thanos-sidecar" Feb 18 20:24:25 crc kubenswrapper[5007]: E0218 20:24:25.479667 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="init-config-reloader" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.479676 5007 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="init-config-reloader" Feb 18 20:24:25 crc kubenswrapper[5007]: E0218 20:24:25.479690 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="config-reloader" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.479698 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="config-reloader" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.479932 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="config-reloader" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.479948 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="prometheus" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.479974 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" containerName="thanos-sidecar" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.483615 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.486578 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.486653 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.486895 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.487072 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.487450 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.487486 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.487635 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-4q7sz" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.495221 5007 scope.go:117] "RemoveContainer" containerID="122d1d7c28bd2f43f71548ba14d009c3a916998e5da057bef7e9685c47871213" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.496519 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.497125 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613381 5007 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/549cb0fe-89a3-4154-8135-e953df4fa898-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613439 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613513 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613561 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/549cb0fe-89a3-4154-8135-e953df4fa898-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613579 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-secret-combined-ca-bundle\") 
pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613600 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613617 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/549cb0fe-89a3-4154-8135-e953df4fa898-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613640 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/549cb0fe-89a3-4154-8135-e953df4fa898-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613674 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/549cb0fe-89a3-4154-8135-e953df4fa898-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613702 
5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64rl5\" (UniqueName: \"kubernetes.io/projected/549cb0fe-89a3-4154-8135-e953df4fa898-kube-api-access-64rl5\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613719 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-config\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613735 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.613767 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.716198 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64rl5\" (UniqueName: \"kubernetes.io/projected/549cb0fe-89a3-4154-8135-e953df4fa898-kube-api-access-64rl5\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 
crc kubenswrapper[5007]: I0218 20:24:25.716267 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-config\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.716297 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.716345 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.716417 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/549cb0fe-89a3-4154-8135-e953df4fa898-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.716450 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.716534 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.716605 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/549cb0fe-89a3-4154-8135-e953df4fa898-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.716633 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.716663 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.716687 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/549cb0fe-89a3-4154-8135-e953df4fa898-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.716716 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/549cb0fe-89a3-4154-8135-e953df4fa898-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.716763 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/549cb0fe-89a3-4154-8135-e953df4fa898-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.717945 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/549cb0fe-89a3-4154-8135-e953df4fa898-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.717974 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/549cb0fe-89a3-4154-8135-e953df4fa898-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.718110 5007 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/549cb0fe-89a3-4154-8135-e953df4fa898-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.721718 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.723236 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/549cb0fe-89a3-4154-8135-e953df4fa898-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.723629 5007 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.723639 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.723650 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1309d09147a960927b84a66d0aaca53d5d9063fe1f3c754674268f7a120fbef0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.723809 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.723873 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/549cb0fe-89a3-4154-8135-e953df4fa898-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.723961 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.727785 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-config\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.740657 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64rl5\" (UniqueName: \"kubernetes.io/projected/549cb0fe-89a3-4154-8135-e953df4fa898-kube-api-access-64rl5\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.741822 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/549cb0fe-89a3-4154-8135-e953df4fa898-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.789200 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6186620d-f34e-4325-8bee-d656de7291fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6186620d-f34e-4325-8bee-d656de7291fd\") pod \"prometheus-metric-storage-0\" (UID: \"549cb0fe-89a3-4154-8135-e953df4fa898\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.825198 5007 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:25 crc kubenswrapper[5007]: I0218 20:24:25.929930 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad6bbbdf-51d7-4968-9e80-eed841e76e5c" path="/var/lib/kubelet/pods/ad6bbbdf-51d7-4968-9e80-eed841e76e5c/volumes" Feb 18 20:24:26 crc kubenswrapper[5007]: I0218 20:24:26.307756 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:24:26 crc kubenswrapper[5007]: I0218 20:24:26.390364 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"549cb0fe-89a3-4154-8135-e953df4fa898","Type":"ContainerStarted","Data":"4648c77d0e0d06855b91e8dfbbe4338f3f916b936ef0429c712bd4d2cf1e4bc6"} Feb 18 20:24:30 crc kubenswrapper[5007]: I0218 20:24:30.438509 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"549cb0fe-89a3-4154-8135-e953df4fa898","Type":"ContainerStarted","Data":"e8cae92972f8cfb30fe0c5c1d2846e256c82baaed160514b5f905c2d6a3cc1d0"} Feb 18 20:24:39 crc kubenswrapper[5007]: I0218 20:24:39.536197 5007 generic.go:334] "Generic (PLEG): container finished" podID="549cb0fe-89a3-4154-8135-e953df4fa898" containerID="e8cae92972f8cfb30fe0c5c1d2846e256c82baaed160514b5f905c2d6a3cc1d0" exitCode=0 Feb 18 20:24:39 crc kubenswrapper[5007]: I0218 20:24:39.536258 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"549cb0fe-89a3-4154-8135-e953df4fa898","Type":"ContainerDied","Data":"e8cae92972f8cfb30fe0c5c1d2846e256c82baaed160514b5f905c2d6a3cc1d0"} Feb 18 20:24:40 crc kubenswrapper[5007]: I0218 20:24:40.551459 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"549cb0fe-89a3-4154-8135-e953df4fa898","Type":"ContainerStarted","Data":"3e51f26e5fffe42201189cfeef0b47af7ec562fdc220c162ce80b090aa4b3243"} Feb 18 20:24:44 crc kubenswrapper[5007]: I0218 20:24:44.586571 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"549cb0fe-89a3-4154-8135-e953df4fa898","Type":"ContainerStarted","Data":"34b7f440f824dee5c48b8718248b9449c31d820d76b11e992c166e4903326544"} Feb 18 20:24:44 crc kubenswrapper[5007]: I0218 20:24:44.587043 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"549cb0fe-89a3-4154-8135-e953df4fa898","Type":"ContainerStarted","Data":"082bda34b8bc84db73221e7f113b0af1fa96ecad1c1f0a693e1c52290374bfa1"} Feb 18 20:24:44 crc kubenswrapper[5007]: I0218 20:24:44.623700 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.623669118 podStartE2EDuration="19.623669118s" podCreationTimestamp="2026-02-18 20:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:24:44.609997731 +0000 UTC m=+2769.392117315" watchObservedRunningTime="2026-02-18 20:24:44.623669118 +0000 UTC m=+2769.405788682" Feb 18 20:24:45 crc kubenswrapper[5007]: I0218 20:24:45.663843 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:24:45 crc kubenswrapper[5007]: I0218 20:24:45.663910 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:24:45 crc kubenswrapper[5007]: I0218 20:24:45.825814 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:55 crc kubenswrapper[5007]: I0218 20:24:55.825375 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:55 crc kubenswrapper[5007]: I0218 20:24:55.838412 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 20:24:56 crc kubenswrapper[5007]: I0218 20:24:56.725437 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.386751 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fl6z8"] Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.389689 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.404579 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fl6z8"] Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.565836 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-catalog-content\") pod \"redhat-operators-fl6z8\" (UID: \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\") " pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.565929 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-utilities\") pod \"redhat-operators-fl6z8\" (UID: \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\") " pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.565989 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4p4b\" (UniqueName: \"kubernetes.io/projected/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-kube-api-access-m4p4b\") pod \"redhat-operators-fl6z8\" (UID: \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\") " pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.668265 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-catalog-content\") pod \"redhat-operators-fl6z8\" (UID: \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\") " pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.668362 5007 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-utilities\") pod \"redhat-operators-fl6z8\" (UID: \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\") " pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.668422 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4p4b\" (UniqueName: \"kubernetes.io/projected/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-kube-api-access-m4p4b\") pod \"redhat-operators-fl6z8\" (UID: \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\") " pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.669250 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-catalog-content\") pod \"redhat-operators-fl6z8\" (UID: \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\") " pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.669481 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-utilities\") pod \"redhat-operators-fl6z8\" (UID: \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\") " pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.689143 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4p4b\" (UniqueName: \"kubernetes.io/projected/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-kube-api-access-m4p4b\") pod \"redhat-operators-fl6z8\" (UID: \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\") " pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:12 crc kubenswrapper[5007]: I0218 20:25:12.713073 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:13 crc kubenswrapper[5007]: I0218 20:25:13.222162 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fl6z8"] Feb 18 20:25:13 crc kubenswrapper[5007]: I0218 20:25:13.884646 5007 generic.go:334] "Generic (PLEG): container finished" podID="c40f8e9d-bca0-44fc-b871-2d86a3ddd565" containerID="4b896244dbb82a3f2800ea686af50f36c18169ae2925728b49f4dbbd680a6788" exitCode=0 Feb 18 20:25:13 crc kubenswrapper[5007]: I0218 20:25:13.884741 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl6z8" event={"ID":"c40f8e9d-bca0-44fc-b871-2d86a3ddd565","Type":"ContainerDied","Data":"4b896244dbb82a3f2800ea686af50f36c18169ae2925728b49f4dbbd680a6788"} Feb 18 20:25:13 crc kubenswrapper[5007]: I0218 20:25:13.884993 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl6z8" event={"ID":"c40f8e9d-bca0-44fc-b871-2d86a3ddd565","Type":"ContainerStarted","Data":"762b7bd0d15f82a9fbf28fcb538161df182eeaf099007184c05f87cc533b14a6"} Feb 18 20:25:15 crc kubenswrapper[5007]: I0218 20:25:15.663510 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:25:15 crc kubenswrapper[5007]: I0218 20:25:15.663824 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:25:15 crc kubenswrapper[5007]: I0218 20:25:15.663868 5007 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 20:25:15 crc kubenswrapper[5007]: I0218 20:25:15.664606 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ade2056cc529ad39f6325110d36bcbcbf391b085c57c1893fdaf86eed673e5be"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:25:15 crc kubenswrapper[5007]: I0218 20:25:15.664658 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://ade2056cc529ad39f6325110d36bcbcbf391b085c57c1893fdaf86eed673e5be" gracePeriod=600 Feb 18 20:25:15 crc kubenswrapper[5007]: I0218 20:25:15.907755 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="ade2056cc529ad39f6325110d36bcbcbf391b085c57c1893fdaf86eed673e5be" exitCode=0 Feb 18 20:25:15 crc kubenswrapper[5007]: I0218 20:25:15.907801 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"ade2056cc529ad39f6325110d36bcbcbf391b085c57c1893fdaf86eed673e5be"} Feb 18 20:25:15 crc kubenswrapper[5007]: I0218 20:25:15.908866 5007 scope.go:117] "RemoveContainer" containerID="0b4a8259211f85780f4e64a2a72ab2c76cc994ffd47b3531cc6b698c09c1a23c" Feb 18 20:25:15 crc kubenswrapper[5007]: I0218 20:25:15.926658 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl6z8" 
event={"ID":"c40f8e9d-bca0-44fc-b871-2d86a3ddd565","Type":"ContainerStarted","Data":"440e2fd565cae3e3aed97d0463ca3abd06c351e148bf95f89855824ac9594f31"} Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.681809 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.683300 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.686518 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.686922 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-45s97" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.688060 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.688131 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.717460 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.751303 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/922c00a0-f93b-4c56-8597-10458c861c67-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.751472 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/922c00a0-f93b-4c56-8597-10458c861c67-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.751518 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/922c00a0-f93b-4c56-8597-10458c861c67-config-data\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.751546 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.751619 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm2gl\" (UniqueName: \"kubernetes.io/projected/922c00a0-f93b-4c56-8597-10458c861c67-kube-api-access-vm2gl\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.751853 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.751952 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.752027 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/922c00a0-f93b-4c56-8597-10458c861c67-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.752065 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.854299 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.854356 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.854388 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/922c00a0-f93b-4c56-8597-10458c861c67-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.854412 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.854455 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/922c00a0-f93b-4c56-8597-10458c861c67-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.854517 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/922c00a0-f93b-4c56-8597-10458c861c67-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.854542 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/922c00a0-f93b-4c56-8597-10458c861c67-config-data\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.854645 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " 
pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.854695 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm2gl\" (UniqueName: \"kubernetes.io/projected/922c00a0-f93b-4c56-8597-10458c861c67-kube-api-access-vm2gl\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.855050 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/922c00a0-f93b-4c56-8597-10458c861c67-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.855210 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/922c00a0-f93b-4c56-8597-10458c861c67-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.855621 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/922c00a0-f93b-4c56-8597-10458c861c67-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.856243 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Feb 
18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.856764 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/922c00a0-f93b-4c56-8597-10458c861c67-config-data\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.860548 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.860647 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.862769 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.870632 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm2gl\" (UniqueName: \"kubernetes.io/projected/922c00a0-f93b-4c56-8597-10458c861c67-kube-api-access-vm2gl\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.893776 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " pod="openstack/tempest-tests-tempest" Feb 18 20:25:16 crc kubenswrapper[5007]: I0218 20:25:16.931248 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4"} Feb 18 20:25:17 crc kubenswrapper[5007]: I0218 20:25:17.014522 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:25:17 crc kubenswrapper[5007]: I0218 20:25:17.489921 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:25:17 crc kubenswrapper[5007]: I0218 20:25:17.938813 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"922c00a0-f93b-4c56-8597-10458c861c67","Type":"ContainerStarted","Data":"7572b539d40f2ddeaaefbaeafd9fee84cff175583af743a90ba13804b2150dce"} Feb 18 20:25:30 crc kubenswrapper[5007]: I0218 20:25:30.158720 5007 generic.go:334] "Generic (PLEG): container finished" podID="c40f8e9d-bca0-44fc-b871-2d86a3ddd565" containerID="440e2fd565cae3e3aed97d0463ca3abd06c351e148bf95f89855824ac9594f31" exitCode=0 Feb 18 20:25:30 crc kubenswrapper[5007]: I0218 20:25:30.158787 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl6z8" event={"ID":"c40f8e9d-bca0-44fc-b871-2d86a3ddd565","Type":"ContainerDied","Data":"440e2fd565cae3e3aed97d0463ca3abd06c351e148bf95f89855824ac9594f31"} Feb 18 20:25:32 crc kubenswrapper[5007]: I0218 20:25:32.247103 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"922c00a0-f93b-4c56-8597-10458c861c67","Type":"ContainerStarted","Data":"68b7e88ea273b8cfab5aaa2e08552dc834856ed4d2f554422fc0050c06c9857f"} Feb 18 20:25:32 crc kubenswrapper[5007]: I0218 20:25:32.288038 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.254985079 podStartE2EDuration="17.288017967s" podCreationTimestamp="2026-02-18 20:25:15 +0000 UTC" firstStartedPulling="2026-02-18 20:25:17.476663399 +0000 UTC m=+2802.258782923" lastFinishedPulling="2026-02-18 20:25:30.509696247 +0000 UTC m=+2815.291815811" observedRunningTime="2026-02-18 20:25:32.269806952 +0000 UTC m=+2817.051926496" watchObservedRunningTime="2026-02-18 20:25:32.288017967 +0000 UTC m=+2817.070137491" Feb 18 20:25:33 crc kubenswrapper[5007]: I0218 20:25:33.269261 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl6z8" event={"ID":"c40f8e9d-bca0-44fc-b871-2d86a3ddd565","Type":"ContainerStarted","Data":"a8b9bec21f97d6d3a141196e79ff412fae0f1435e54123afb3da764e214a3f17"} Feb 18 20:25:33 crc kubenswrapper[5007]: I0218 20:25:33.292751 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fl6z8" podStartSLOduration=2.8735458 podStartE2EDuration="21.292722101s" podCreationTimestamp="2026-02-18 20:25:12 +0000 UTC" firstStartedPulling="2026-02-18 20:25:13.886573736 +0000 UTC m=+2798.668693260" lastFinishedPulling="2026-02-18 20:25:32.305750037 +0000 UTC m=+2817.087869561" observedRunningTime="2026-02-18 20:25:33.287995147 +0000 UTC m=+2818.070114701" watchObservedRunningTime="2026-02-18 20:25:33.292722101 +0000 UTC m=+2818.074841635" Feb 18 20:25:42 crc kubenswrapper[5007]: I0218 20:25:42.713508 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:42 crc kubenswrapper[5007]: I0218 20:25:42.714202 5007 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:42 crc kubenswrapper[5007]: I0218 20:25:42.763798 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:43 crc kubenswrapper[5007]: I0218 20:25:43.447494 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:43 crc kubenswrapper[5007]: I0218 20:25:43.595111 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fl6z8"] Feb 18 20:25:45 crc kubenswrapper[5007]: I0218 20:25:45.405955 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fl6z8" podUID="c40f8e9d-bca0-44fc-b871-2d86a3ddd565" containerName="registry-server" containerID="cri-o://a8b9bec21f97d6d3a141196e79ff412fae0f1435e54123afb3da764e214a3f17" gracePeriod=2 Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.419055 5007 generic.go:334] "Generic (PLEG): container finished" podID="c40f8e9d-bca0-44fc-b871-2d86a3ddd565" containerID="a8b9bec21f97d6d3a141196e79ff412fae0f1435e54123afb3da764e214a3f17" exitCode=0 Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.419146 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl6z8" event={"ID":"c40f8e9d-bca0-44fc-b871-2d86a3ddd565","Type":"ContainerDied","Data":"a8b9bec21f97d6d3a141196e79ff412fae0f1435e54123afb3da764e214a3f17"} Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.419939 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fl6z8" event={"ID":"c40f8e9d-bca0-44fc-b871-2d86a3ddd565","Type":"ContainerDied","Data":"762b7bd0d15f82a9fbf28fcb538161df182eeaf099007184c05f87cc533b14a6"} Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.419962 5007 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="762b7bd0d15f82a9fbf28fcb538161df182eeaf099007184c05f87cc533b14a6" Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.461288 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.655530 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-utilities\") pod \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\" (UID: \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\") " Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.655647 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4p4b\" (UniqueName: \"kubernetes.io/projected/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-kube-api-access-m4p4b\") pod \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\" (UID: \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\") " Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.655680 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-catalog-content\") pod \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\" (UID: \"c40f8e9d-bca0-44fc-b871-2d86a3ddd565\") " Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.658630 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-utilities" (OuterVolumeSpecName: "utilities") pod "c40f8e9d-bca0-44fc-b871-2d86a3ddd565" (UID: "c40f8e9d-bca0-44fc-b871-2d86a3ddd565"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.666836 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-kube-api-access-m4p4b" (OuterVolumeSpecName: "kube-api-access-m4p4b") pod "c40f8e9d-bca0-44fc-b871-2d86a3ddd565" (UID: "c40f8e9d-bca0-44fc-b871-2d86a3ddd565"). InnerVolumeSpecName "kube-api-access-m4p4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.759101 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.759579 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4p4b\" (UniqueName: \"kubernetes.io/projected/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-kube-api-access-m4p4b\") on node \"crc\" DevicePath \"\"" Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.792072 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c40f8e9d-bca0-44fc-b871-2d86a3ddd565" (UID: "c40f8e9d-bca0-44fc-b871-2d86a3ddd565"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:25:46 crc kubenswrapper[5007]: I0218 20:25:46.862950 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c40f8e9d-bca0-44fc-b871-2d86a3ddd565-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:25:47 crc kubenswrapper[5007]: I0218 20:25:47.433915 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fl6z8" Feb 18 20:25:47 crc kubenswrapper[5007]: I0218 20:25:47.484841 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fl6z8"] Feb 18 20:25:47 crc kubenswrapper[5007]: I0218 20:25:47.499316 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fl6z8"] Feb 18 20:25:47 crc kubenswrapper[5007]: I0218 20:25:47.921660 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40f8e9d-bca0-44fc-b871-2d86a3ddd565" path="/var/lib/kubelet/pods/c40f8e9d-bca0-44fc-b871-2d86a3ddd565/volumes" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.405218 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gcjf2"] Feb 18 20:26:51 crc kubenswrapper[5007]: E0218 20:26:51.406365 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40f8e9d-bca0-44fc-b871-2d86a3ddd565" containerName="extract-content" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.406381 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40f8e9d-bca0-44fc-b871-2d86a3ddd565" containerName="extract-content" Feb 18 20:26:51 crc kubenswrapper[5007]: E0218 20:26:51.406421 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40f8e9d-bca0-44fc-b871-2d86a3ddd565" containerName="extract-utilities" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.406445 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40f8e9d-bca0-44fc-b871-2d86a3ddd565" containerName="extract-utilities" Feb 18 20:26:51 crc kubenswrapper[5007]: E0218 20:26:51.406466 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40f8e9d-bca0-44fc-b871-2d86a3ddd565" containerName="registry-server" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.406474 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40f8e9d-bca0-44fc-b871-2d86a3ddd565" 
containerName="registry-server" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.406694 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40f8e9d-bca0-44fc-b871-2d86a3ddd565" containerName="registry-server" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.407983 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.423728 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcjf2"] Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.549144 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-utilities\") pod \"redhat-marketplace-gcjf2\" (UID: \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\") " pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.549252 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-catalog-content\") pod \"redhat-marketplace-gcjf2\" (UID: \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\") " pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.549708 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx5j4\" (UniqueName: \"kubernetes.io/projected/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-kube-api-access-lx5j4\") pod \"redhat-marketplace-gcjf2\" (UID: \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\") " pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.651449 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lx5j4\" (UniqueName: \"kubernetes.io/projected/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-kube-api-access-lx5j4\") pod \"redhat-marketplace-gcjf2\" (UID: \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\") " pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.651856 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-utilities\") pod \"redhat-marketplace-gcjf2\" (UID: \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\") " pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.651970 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-catalog-content\") pod \"redhat-marketplace-gcjf2\" (UID: \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\") " pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.652615 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-catalog-content\") pod \"redhat-marketplace-gcjf2\" (UID: \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\") " pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.652708 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-utilities\") pod \"redhat-marketplace-gcjf2\" (UID: \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\") " pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.679682 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx5j4\" (UniqueName: 
\"kubernetes.io/projected/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-kube-api-access-lx5j4\") pod \"redhat-marketplace-gcjf2\" (UID: \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\") " pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:26:51 crc kubenswrapper[5007]: I0218 20:26:51.749144 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:26:52 crc kubenswrapper[5007]: I0218 20:26:52.273545 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcjf2"] Feb 18 20:26:53 crc kubenswrapper[5007]: I0218 20:26:53.166752 5007 generic.go:334] "Generic (PLEG): container finished" podID="f74fdaf9-4648-4a07-8933-038a7d4c8d0f" containerID="f7e85b8f5c3f8a395dec96784cc974fd8b38614a8acec94b6c9f5ceaae11a2e9" exitCode=0 Feb 18 20:26:53 crc kubenswrapper[5007]: I0218 20:26:53.167211 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcjf2" event={"ID":"f74fdaf9-4648-4a07-8933-038a7d4c8d0f","Type":"ContainerDied","Data":"f7e85b8f5c3f8a395dec96784cc974fd8b38614a8acec94b6c9f5ceaae11a2e9"} Feb 18 20:26:53 crc kubenswrapper[5007]: I0218 20:26:53.167528 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcjf2" event={"ID":"f74fdaf9-4648-4a07-8933-038a7d4c8d0f","Type":"ContainerStarted","Data":"6e3f2e14377705676ae9eb7ac402701a1245a7e9440816cacdee7f176d055925"} Feb 18 20:26:54 crc kubenswrapper[5007]: I0218 20:26:54.184707 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcjf2" event={"ID":"f74fdaf9-4648-4a07-8933-038a7d4c8d0f","Type":"ContainerStarted","Data":"98cb35390136ca83bb89baed88d9268d8892a61f077f0b2cfe98e42929ef8cc2"} Feb 18 20:26:55 crc kubenswrapper[5007]: I0218 20:26:55.202600 5007 generic.go:334] "Generic (PLEG): container finished" podID="f74fdaf9-4648-4a07-8933-038a7d4c8d0f" 
containerID="98cb35390136ca83bb89baed88d9268d8892a61f077f0b2cfe98e42929ef8cc2" exitCode=0 Feb 18 20:26:55 crc kubenswrapper[5007]: I0218 20:26:55.203745 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcjf2" event={"ID":"f74fdaf9-4648-4a07-8933-038a7d4c8d0f","Type":"ContainerDied","Data":"98cb35390136ca83bb89baed88d9268d8892a61f077f0b2cfe98e42929ef8cc2"} Feb 18 20:26:56 crc kubenswrapper[5007]: I0218 20:26:56.216908 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcjf2" event={"ID":"f74fdaf9-4648-4a07-8933-038a7d4c8d0f","Type":"ContainerStarted","Data":"ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d"} Feb 18 20:26:56 crc kubenswrapper[5007]: I0218 20:26:56.247726 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gcjf2" podStartSLOduration=2.802702144 podStartE2EDuration="5.247689764s" podCreationTimestamp="2026-02-18 20:26:51 +0000 UTC" firstStartedPulling="2026-02-18 20:26:53.170736053 +0000 UTC m=+2897.952855587" lastFinishedPulling="2026-02-18 20:26:55.615723633 +0000 UTC m=+2900.397843207" observedRunningTime="2026-02-18 20:26:56.24262014 +0000 UTC m=+2901.024739694" watchObservedRunningTime="2026-02-18 20:26:56.247689764 +0000 UTC m=+2901.029809328" Feb 18 20:27:00 crc kubenswrapper[5007]: E0218 20:27:00.988676 5007 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.08s" Feb 18 20:27:01 crc kubenswrapper[5007]: I0218 20:27:01.749394 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:27:01 crc kubenswrapper[5007]: I0218 20:27:01.749454 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:27:01 crc kubenswrapper[5007]: I0218 20:27:01.801111 5007 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:27:02 crc kubenswrapper[5007]: I0218 20:27:02.055144 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:27:02 crc kubenswrapper[5007]: I0218 20:27:02.111875 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcjf2"] Feb 18 20:27:04 crc kubenswrapper[5007]: I0218 20:27:04.032315 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gcjf2" podUID="f74fdaf9-4648-4a07-8933-038a7d4c8d0f" containerName="registry-server" containerID="cri-o://ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d" gracePeriod=2 Feb 18 20:27:04 crc kubenswrapper[5007]: I0218 20:27:04.586687 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:27:04 crc kubenswrapper[5007]: I0218 20:27:04.743174 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx5j4\" (UniqueName: \"kubernetes.io/projected/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-kube-api-access-lx5j4\") pod \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\" (UID: \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\") " Feb 18 20:27:04 crc kubenswrapper[5007]: I0218 20:27:04.743407 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-utilities\") pod \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\" (UID: \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\") " Feb 18 20:27:04 crc kubenswrapper[5007]: I0218 20:27:04.743550 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-catalog-content\") pod \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\" (UID: \"f74fdaf9-4648-4a07-8933-038a7d4c8d0f\") " Feb 18 20:27:04 crc kubenswrapper[5007]: I0218 20:27:04.744633 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-utilities" (OuterVolumeSpecName: "utilities") pod "f74fdaf9-4648-4a07-8933-038a7d4c8d0f" (UID: "f74fdaf9-4648-4a07-8933-038a7d4c8d0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:27:04 crc kubenswrapper[5007]: I0218 20:27:04.759548 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-kube-api-access-lx5j4" (OuterVolumeSpecName: "kube-api-access-lx5j4") pod "f74fdaf9-4648-4a07-8933-038a7d4c8d0f" (UID: "f74fdaf9-4648-4a07-8933-038a7d4c8d0f"). InnerVolumeSpecName "kube-api-access-lx5j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:27:04 crc kubenswrapper[5007]: I0218 20:27:04.772014 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f74fdaf9-4648-4a07-8933-038a7d4c8d0f" (UID: "f74fdaf9-4648-4a07-8933-038a7d4c8d0f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:27:04 crc kubenswrapper[5007]: I0218 20:27:04.846070 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx5j4\" (UniqueName: \"kubernetes.io/projected/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-kube-api-access-lx5j4\") on node \"crc\" DevicePath \"\"" Feb 18 20:27:04 crc kubenswrapper[5007]: I0218 20:27:04.846108 5007 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:27:04 crc kubenswrapper[5007]: I0218 20:27:04.846119 5007 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f74fdaf9-4648-4a07-8933-038a7d4c8d0f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.048114 5007 generic.go:334] "Generic (PLEG): container finished" podID="f74fdaf9-4648-4a07-8933-038a7d4c8d0f" containerID="ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d" exitCode=0 Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.048233 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gcjf2" Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.048263 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcjf2" event={"ID":"f74fdaf9-4648-4a07-8933-038a7d4c8d0f","Type":"ContainerDied","Data":"ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d"} Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.050336 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gcjf2" event={"ID":"f74fdaf9-4648-4a07-8933-038a7d4c8d0f","Type":"ContainerDied","Data":"6e3f2e14377705676ae9eb7ac402701a1245a7e9440816cacdee7f176d055925"} Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.050383 5007 scope.go:117] "RemoveContainer" containerID="ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d" Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.095945 5007 scope.go:117] "RemoveContainer" containerID="98cb35390136ca83bb89baed88d9268d8892a61f077f0b2cfe98e42929ef8cc2" Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.099099 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcjf2"] Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.116489 5007 scope.go:117] "RemoveContainer" containerID="f7e85b8f5c3f8a395dec96784cc974fd8b38614a8acec94b6c9f5ceaae11a2e9" Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.116496 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gcjf2"] Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.177651 5007 scope.go:117] "RemoveContainer" containerID="ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d" Feb 18 20:27:05 crc kubenswrapper[5007]: E0218 20:27:05.178226 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d\": container with ID starting with ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d not found: ID does not exist" containerID="ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d" Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.178280 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d"} err="failed to get container status \"ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d\": rpc error: code = NotFound desc = could not find container \"ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d\": container with ID starting with ba4425e4f3c27a49c250769ac83c26a6946cbdfaa6efcae1476d49c363dae36d not found: ID does not exist" Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.178312 5007 scope.go:117] "RemoveContainer" containerID="98cb35390136ca83bb89baed88d9268d8892a61f077f0b2cfe98e42929ef8cc2" Feb 18 20:27:05 crc kubenswrapper[5007]: E0218 20:27:05.178823 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98cb35390136ca83bb89baed88d9268d8892a61f077f0b2cfe98e42929ef8cc2\": container with ID starting with 98cb35390136ca83bb89baed88d9268d8892a61f077f0b2cfe98e42929ef8cc2 not found: ID does not exist" containerID="98cb35390136ca83bb89baed88d9268d8892a61f077f0b2cfe98e42929ef8cc2" Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.178854 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98cb35390136ca83bb89baed88d9268d8892a61f077f0b2cfe98e42929ef8cc2"} err="failed to get container status \"98cb35390136ca83bb89baed88d9268d8892a61f077f0b2cfe98e42929ef8cc2\": rpc error: code = NotFound desc = could not find container \"98cb35390136ca83bb89baed88d9268d8892a61f077f0b2cfe98e42929ef8cc2\": container with ID 
starting with 98cb35390136ca83bb89baed88d9268d8892a61f077f0b2cfe98e42929ef8cc2 not found: ID does not exist" Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.178872 5007 scope.go:117] "RemoveContainer" containerID="f7e85b8f5c3f8a395dec96784cc974fd8b38614a8acec94b6c9f5ceaae11a2e9" Feb 18 20:27:05 crc kubenswrapper[5007]: E0218 20:27:05.179393 5007 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e85b8f5c3f8a395dec96784cc974fd8b38614a8acec94b6c9f5ceaae11a2e9\": container with ID starting with f7e85b8f5c3f8a395dec96784cc974fd8b38614a8acec94b6c9f5ceaae11a2e9 not found: ID does not exist" containerID="f7e85b8f5c3f8a395dec96784cc974fd8b38614a8acec94b6c9f5ceaae11a2e9" Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.179461 5007 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e85b8f5c3f8a395dec96784cc974fd8b38614a8acec94b6c9f5ceaae11a2e9"} err="failed to get container status \"f7e85b8f5c3f8a395dec96784cc974fd8b38614a8acec94b6c9f5ceaae11a2e9\": rpc error: code = NotFound desc = could not find container \"f7e85b8f5c3f8a395dec96784cc974fd8b38614a8acec94b6c9f5ceaae11a2e9\": container with ID starting with f7e85b8f5c3f8a395dec96784cc974fd8b38614a8acec94b6c9f5ceaae11a2e9 not found: ID does not exist" Feb 18 20:27:05 crc kubenswrapper[5007]: I0218 20:27:05.932615 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f74fdaf9-4648-4a07-8933-038a7d4c8d0f" path="/var/lib/kubelet/pods/f74fdaf9-4648-4a07-8933-038a7d4c8d0f/volumes" Feb 18 20:27:27 crc kubenswrapper[5007]: I0218 20:27:27.847938 5007 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 
18 20:27:27 crc kubenswrapper[5007]: I0218 20:27:27.848411 5007 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 20:27:45 crc kubenswrapper[5007]: I0218 20:27:45.663903 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:27:45 crc kubenswrapper[5007]: I0218 20:27:45.665138 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:28:15 crc kubenswrapper[5007]: I0218 20:28:15.664372 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:28:15 crc kubenswrapper[5007]: I0218 20:28:15.665561 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:28:45 crc kubenswrapper[5007]: I0218 20:28:45.665142 5007 
patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:28:45 crc kubenswrapper[5007]: I0218 20:28:45.665788 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:28:45 crc kubenswrapper[5007]: I0218 20:28:45.665855 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 20:28:45 crc kubenswrapper[5007]: I0218 20:28:45.666676 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:28:45 crc kubenswrapper[5007]: I0218 20:28:45.666769 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" gracePeriod=600 Feb 18 20:28:45 crc kubenswrapper[5007]: E0218 20:28:45.796638 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:28:46 crc kubenswrapper[5007]: I0218 20:28:46.790156 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" exitCode=0 Feb 18 20:28:46 crc kubenswrapper[5007]: I0218 20:28:46.790240 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4"} Feb 18 20:28:46 crc kubenswrapper[5007]: I0218 20:28:46.790506 5007 scope.go:117] "RemoveContainer" containerID="ade2056cc529ad39f6325110d36bcbcbf391b085c57c1893fdaf86eed673e5be" Feb 18 20:28:46 crc kubenswrapper[5007]: I0218 20:28:46.791520 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:28:46 crc kubenswrapper[5007]: E0218 20:28:46.792072 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:29:01 crc kubenswrapper[5007]: I0218 20:29:01.909627 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:29:01 crc kubenswrapper[5007]: E0218 20:29:01.910674 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:29:14 crc kubenswrapper[5007]: I0218 20:29:14.910141 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:29:14 crc kubenswrapper[5007]: E0218 20:29:14.911203 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.290035 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t5gwr"] Feb 18 20:29:28 crc kubenswrapper[5007]: E0218 20:29:28.290962 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f74fdaf9-4648-4a07-8933-038a7d4c8d0f" containerName="extract-utilities" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.290975 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f74fdaf9-4648-4a07-8933-038a7d4c8d0f" containerName="extract-utilities" Feb 18 20:29:28 crc kubenswrapper[5007]: E0218 20:29:28.290997 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f74fdaf9-4648-4a07-8933-038a7d4c8d0f" containerName="extract-content" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.291003 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f74fdaf9-4648-4a07-8933-038a7d4c8d0f" containerName="extract-content" Feb 18 20:29:28 crc 
kubenswrapper[5007]: E0218 20:29:28.291022 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f74fdaf9-4648-4a07-8933-038a7d4c8d0f" containerName="registry-server" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.291030 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="f74fdaf9-4648-4a07-8933-038a7d4c8d0f" containerName="registry-server" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.291243 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="f74fdaf9-4648-4a07-8933-038a7d4c8d0f" containerName="registry-server" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.292859 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.323419 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5gwr"] Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.462230 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d79bddda-80b1-41bc-92e9-1d41a7c7c84a-catalog-content\") pod \"certified-operators-t5gwr\" (UID: \"d79bddda-80b1-41bc-92e9-1d41a7c7c84a\") " pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.462745 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d79bddda-80b1-41bc-92e9-1d41a7c7c84a-utilities\") pod \"certified-operators-t5gwr\" (UID: \"d79bddda-80b1-41bc-92e9-1d41a7c7c84a\") " pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.462806 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnmkz\" (UniqueName: 
\"kubernetes.io/projected/d79bddda-80b1-41bc-92e9-1d41a7c7c84a-kube-api-access-bnmkz\") pod \"certified-operators-t5gwr\" (UID: \"d79bddda-80b1-41bc-92e9-1d41a7c7c84a\") " pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.564282 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnmkz\" (UniqueName: \"kubernetes.io/projected/d79bddda-80b1-41bc-92e9-1d41a7c7c84a-kube-api-access-bnmkz\") pod \"certified-operators-t5gwr\" (UID: \"d79bddda-80b1-41bc-92e9-1d41a7c7c84a\") " pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.564399 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d79bddda-80b1-41bc-92e9-1d41a7c7c84a-catalog-content\") pod \"certified-operators-t5gwr\" (UID: \"d79bddda-80b1-41bc-92e9-1d41a7c7c84a\") " pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.564522 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d79bddda-80b1-41bc-92e9-1d41a7c7c84a-utilities\") pod \"certified-operators-t5gwr\" (UID: \"d79bddda-80b1-41bc-92e9-1d41a7c7c84a\") " pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.564946 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d79bddda-80b1-41bc-92e9-1d41a7c7c84a-catalog-content\") pod \"certified-operators-t5gwr\" (UID: \"d79bddda-80b1-41bc-92e9-1d41a7c7c84a\") " pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.564996 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d79bddda-80b1-41bc-92e9-1d41a7c7c84a-utilities\") pod \"certified-operators-t5gwr\" (UID: \"d79bddda-80b1-41bc-92e9-1d41a7c7c84a\") " pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.585564 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnmkz\" (UniqueName: \"kubernetes.io/projected/d79bddda-80b1-41bc-92e9-1d41a7c7c84a-kube-api-access-bnmkz\") pod \"certified-operators-t5gwr\" (UID: \"d79bddda-80b1-41bc-92e9-1d41a7c7c84a\") " pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 20:29:28 crc kubenswrapper[5007]: I0218 20:29:28.615228 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 20:29:29 crc kubenswrapper[5007]: I0218 20:29:29.136639 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5gwr"] Feb 18 20:29:29 crc kubenswrapper[5007]: I0218 20:29:29.647134 5007 generic.go:334] "Generic (PLEG): container finished" podID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" containerID="64100ec52478a8486ee7640cf6bfea62333c3e969bc092ea828788da9f29ef28" exitCode=0 Feb 18 20:29:29 crc kubenswrapper[5007]: I0218 20:29:29.647213 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5gwr" event={"ID":"d79bddda-80b1-41bc-92e9-1d41a7c7c84a","Type":"ContainerDied","Data":"64100ec52478a8486ee7640cf6bfea62333c3e969bc092ea828788da9f29ef28"} Feb 18 20:29:29 crc kubenswrapper[5007]: I0218 20:29:29.647282 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5gwr" event={"ID":"d79bddda-80b1-41bc-92e9-1d41a7c7c84a","Type":"ContainerStarted","Data":"09e6fba5cc3f7ac2532a3e8012ac1b08ded92258b74a8e0d701f92a163183462"} Feb 18 20:29:29 crc kubenswrapper[5007]: I0218 20:29:29.649737 5007 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 18 20:29:29 crc kubenswrapper[5007]: I0218 20:29:29.910121 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:29:29 crc kubenswrapper[5007]: E0218 20:29:29.910649 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:29:30 crc kubenswrapper[5007]: I0218 20:29:30.658087 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5gwr" event={"ID":"d79bddda-80b1-41bc-92e9-1d41a7c7c84a","Type":"ContainerStarted","Data":"fce67048e19519c184a33771b2327507a0c62f1c6d1af9f47cf2d2c0c19f58c7"} Feb 18 20:29:32 crc kubenswrapper[5007]: I0218 20:29:32.675497 5007 generic.go:334] "Generic (PLEG): container finished" podID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" containerID="fce67048e19519c184a33771b2327507a0c62f1c6d1af9f47cf2d2c0c19f58c7" exitCode=0 Feb 18 20:29:32 crc kubenswrapper[5007]: I0218 20:29:32.675547 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5gwr" event={"ID":"d79bddda-80b1-41bc-92e9-1d41a7c7c84a","Type":"ContainerDied","Data":"fce67048e19519c184a33771b2327507a0c62f1c6d1af9f47cf2d2c0c19f58c7"} Feb 18 20:29:33 crc kubenswrapper[5007]: E0218 20:29:33.012423 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" 
image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:29:33 crc kubenswrapper[5007]: E0218 20:29:33.012595 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnmkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t5gwr_openshift-marketplace(d79bddda-80b1-41bc-92e9-1d41a7c7c84a): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:29:33 crc kubenswrapper[5007]: E0218 20:29:33.013731 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.350868 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mhwn8"] Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.355215 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhwn8" Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.392457 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhwn8"] Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.447881 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sh4h\" (UniqueName: \"kubernetes.io/projected/9b961f2a-acd6-470c-a5bb-3531e607c818-kube-api-access-4sh4h\") pod \"community-operators-mhwn8\" (UID: \"9b961f2a-acd6-470c-a5bb-3531e607c818\") " pod="openshift-marketplace/community-operators-mhwn8" Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.448001 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b961f2a-acd6-470c-a5bb-3531e607c818-catalog-content\") pod \"community-operators-mhwn8\" (UID: \"9b961f2a-acd6-470c-a5bb-3531e607c818\") " pod="openshift-marketplace/community-operators-mhwn8" Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.448107 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b961f2a-acd6-470c-a5bb-3531e607c818-utilities\") pod \"community-operators-mhwn8\" (UID: \"9b961f2a-acd6-470c-a5bb-3531e607c818\") " pod="openshift-marketplace/community-operators-mhwn8" Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.550371 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b961f2a-acd6-470c-a5bb-3531e607c818-utilities\") pod \"community-operators-mhwn8\" (UID: \"9b961f2a-acd6-470c-a5bb-3531e607c818\") " pod="openshift-marketplace/community-operators-mhwn8" Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.550829 5007 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b961f2a-acd6-470c-a5bb-3531e607c818-utilities\") pod \"community-operators-mhwn8\" (UID: \"9b961f2a-acd6-470c-a5bb-3531e607c818\") " pod="openshift-marketplace/community-operators-mhwn8" Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.550930 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sh4h\" (UniqueName: \"kubernetes.io/projected/9b961f2a-acd6-470c-a5bb-3531e607c818-kube-api-access-4sh4h\") pod \"community-operators-mhwn8\" (UID: \"9b961f2a-acd6-470c-a5bb-3531e607c818\") " pod="openshift-marketplace/community-operators-mhwn8" Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.551019 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b961f2a-acd6-470c-a5bb-3531e607c818-catalog-content\") pod \"community-operators-mhwn8\" (UID: \"9b961f2a-acd6-470c-a5bb-3531e607c818\") " pod="openshift-marketplace/community-operators-mhwn8" Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.551322 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b961f2a-acd6-470c-a5bb-3531e607c818-catalog-content\") pod \"community-operators-mhwn8\" (UID: \"9b961f2a-acd6-470c-a5bb-3531e607c818\") " pod="openshift-marketplace/community-operators-mhwn8" Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.571456 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sh4h\" (UniqueName: \"kubernetes.io/projected/9b961f2a-acd6-470c-a5bb-3531e607c818-kube-api-access-4sh4h\") pod \"community-operators-mhwn8\" (UID: \"9b961f2a-acd6-470c-a5bb-3531e607c818\") " pod="openshift-marketplace/community-operators-mhwn8" Feb 18 20:29:36 crc kubenswrapper[5007]: I0218 20:29:36.710743 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhwn8" Feb 18 20:29:37 crc kubenswrapper[5007]: I0218 20:29:37.239819 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhwn8"] Feb 18 20:29:37 crc kubenswrapper[5007]: I0218 20:29:37.722763 5007 generic.go:334] "Generic (PLEG): container finished" podID="9b961f2a-acd6-470c-a5bb-3531e607c818" containerID="66780d97992adb75fca40fd718de8d5133c465d8193c279cd424c402d35c39a0" exitCode=0 Feb 18 20:29:37 crc kubenswrapper[5007]: I0218 20:29:37.722863 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhwn8" event={"ID":"9b961f2a-acd6-470c-a5bb-3531e607c818","Type":"ContainerDied","Data":"66780d97992adb75fca40fd718de8d5133c465d8193c279cd424c402d35c39a0"} Feb 18 20:29:37 crc kubenswrapper[5007]: I0218 20:29:37.723375 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhwn8" event={"ID":"9b961f2a-acd6-470c-a5bb-3531e607c818","Type":"ContainerStarted","Data":"b9cecb59b47c21e6fcd256e90bc8eaf8c4d19f745628b6c7ca1dccd51e7d8159"} Feb 18 20:29:39 crc kubenswrapper[5007]: I0218 20:29:39.752818 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhwn8" event={"ID":"9b961f2a-acd6-470c-a5bb-3531e607c818","Type":"ContainerStarted","Data":"a693b4e0d17c68da0835304f43214cfdbbddfea7d98c7dd753bcef243f7ee0d6"} Feb 18 20:29:40 crc kubenswrapper[5007]: I0218 20:29:40.765979 5007 generic.go:334] "Generic (PLEG): container finished" podID="9b961f2a-acd6-470c-a5bb-3531e607c818" containerID="a693b4e0d17c68da0835304f43214cfdbbddfea7d98c7dd753bcef243f7ee0d6" exitCode=0 Feb 18 20:29:40 crc kubenswrapper[5007]: I0218 20:29:40.766088 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhwn8" 
event={"ID":"9b961f2a-acd6-470c-a5bb-3531e607c818","Type":"ContainerDied","Data":"a693b4e0d17c68da0835304f43214cfdbbddfea7d98c7dd753bcef243f7ee0d6"} Feb 18 20:29:41 crc kubenswrapper[5007]: E0218 20:29:41.184237 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:29:41 crc kubenswrapper[5007]: E0218 20:29:41.184393 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:29:41 crc kubenswrapper[5007]: E0218 20:29:41.185616 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:29:41 crc kubenswrapper[5007]: E0218 20:29:41.780282 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:29:43 crc kubenswrapper[5007]: I0218 20:29:43.909275 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:29:43 crc kubenswrapper[5007]: E0218 20:29:43.909836 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:29:45 crc kubenswrapper[5007]: E0218 20:29:45.252073 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:29:45 crc kubenswrapper[5007]: E0218 20:29:45.252574 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnmkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t5gwr_openshift-marketplace(d79bddda-80b1-41bc-92e9-1d41a7c7c84a): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:29:45 crc kubenswrapper[5007]: E0218 20:29:45.253818 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:29:52 crc kubenswrapper[5007]: E0218 20:29:52.238694 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:29:52 crc kubenswrapper[5007]: E0218 20:29:52.239583 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:29:52 crc kubenswrapper[5007]: E0218 20:29:52.241457 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:29:55 crc kubenswrapper[5007]: I0218 20:29:55.915541 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:29:55 crc kubenswrapper[5007]: E0218 20:29:55.916015 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.172053 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7"] Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.174823 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.179913 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.180514 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.192105 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7"] Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.349027 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42b0739c-53b5-4133-87d5-d9dd47f5131b-secret-volume\") pod \"collect-profiles-29524110-zvsk7\" (UID: \"42b0739c-53b5-4133-87d5-d9dd47f5131b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.349075 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bk7w\" (UniqueName: \"kubernetes.io/projected/42b0739c-53b5-4133-87d5-d9dd47f5131b-kube-api-access-2bk7w\") pod \"collect-profiles-29524110-zvsk7\" (UID: \"42b0739c-53b5-4133-87d5-d9dd47f5131b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.349186 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42b0739c-53b5-4133-87d5-d9dd47f5131b-config-volume\") pod \"collect-profiles-29524110-zvsk7\" (UID: \"42b0739c-53b5-4133-87d5-d9dd47f5131b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.452055 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42b0739c-53b5-4133-87d5-d9dd47f5131b-secret-volume\") pod \"collect-profiles-29524110-zvsk7\" (UID: \"42b0739c-53b5-4133-87d5-d9dd47f5131b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.452126 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bk7w\" (UniqueName: \"kubernetes.io/projected/42b0739c-53b5-4133-87d5-d9dd47f5131b-kube-api-access-2bk7w\") pod \"collect-profiles-29524110-zvsk7\" (UID: \"42b0739c-53b5-4133-87d5-d9dd47f5131b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.452307 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42b0739c-53b5-4133-87d5-d9dd47f5131b-config-volume\") pod \"collect-profiles-29524110-zvsk7\" (UID: \"42b0739c-53b5-4133-87d5-d9dd47f5131b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.454063 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42b0739c-53b5-4133-87d5-d9dd47f5131b-config-volume\") pod \"collect-profiles-29524110-zvsk7\" (UID: \"42b0739c-53b5-4133-87d5-d9dd47f5131b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.464729 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/42b0739c-53b5-4133-87d5-d9dd47f5131b-secret-volume\") pod \"collect-profiles-29524110-zvsk7\" (UID: \"42b0739c-53b5-4133-87d5-d9dd47f5131b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.472180 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bk7w\" (UniqueName: \"kubernetes.io/projected/42b0739c-53b5-4133-87d5-d9dd47f5131b-kube-api-access-2bk7w\") pod \"collect-profiles-29524110-zvsk7\" (UID: \"42b0739c-53b5-4133-87d5-d9dd47f5131b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:00 crc kubenswrapper[5007]: I0218 20:30:00.541876 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:00 crc kubenswrapper[5007]: E0218 20:30:00.927582 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:30:01 crc kubenswrapper[5007]: I0218 20:30:01.081799 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7"] Feb 18 20:30:02 crc kubenswrapper[5007]: I0218 20:30:02.043279 5007 generic.go:334] "Generic (PLEG): container finished" podID="42b0739c-53b5-4133-87d5-d9dd47f5131b" containerID="2b54f576158514c4295c9c49288c5ab95a5325b90c4c20baef6fb8056d8aefaf" exitCode=0 Feb 18 20:30:02 crc kubenswrapper[5007]: I0218 20:30:02.043590 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" 
event={"ID":"42b0739c-53b5-4133-87d5-d9dd47f5131b","Type":"ContainerDied","Data":"2b54f576158514c4295c9c49288c5ab95a5325b90c4c20baef6fb8056d8aefaf"} Feb 18 20:30:02 crc kubenswrapper[5007]: I0218 20:30:02.043619 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" event={"ID":"42b0739c-53b5-4133-87d5-d9dd47f5131b","Type":"ContainerStarted","Data":"a080e16f1b2ca35cc89335c42b38b4d41195014a6b21c026729f9fdb8d0e70ed"} Feb 18 20:30:03 crc kubenswrapper[5007]: I0218 20:30:03.447152 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:03 crc kubenswrapper[5007]: I0218 20:30:03.558970 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bk7w\" (UniqueName: \"kubernetes.io/projected/42b0739c-53b5-4133-87d5-d9dd47f5131b-kube-api-access-2bk7w\") pod \"42b0739c-53b5-4133-87d5-d9dd47f5131b\" (UID: \"42b0739c-53b5-4133-87d5-d9dd47f5131b\") " Feb 18 20:30:03 crc kubenswrapper[5007]: I0218 20:30:03.559025 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42b0739c-53b5-4133-87d5-d9dd47f5131b-config-volume\") pod \"42b0739c-53b5-4133-87d5-d9dd47f5131b\" (UID: \"42b0739c-53b5-4133-87d5-d9dd47f5131b\") " Feb 18 20:30:03 crc kubenswrapper[5007]: I0218 20:30:03.559057 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42b0739c-53b5-4133-87d5-d9dd47f5131b-secret-volume\") pod \"42b0739c-53b5-4133-87d5-d9dd47f5131b\" (UID: \"42b0739c-53b5-4133-87d5-d9dd47f5131b\") " Feb 18 20:30:03 crc kubenswrapper[5007]: I0218 20:30:03.560027 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b0739c-53b5-4133-87d5-d9dd47f5131b-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "42b0739c-53b5-4133-87d5-d9dd47f5131b" (UID: "42b0739c-53b5-4133-87d5-d9dd47f5131b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:30:03 crc kubenswrapper[5007]: I0218 20:30:03.567742 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b0739c-53b5-4133-87d5-d9dd47f5131b-kube-api-access-2bk7w" (OuterVolumeSpecName: "kube-api-access-2bk7w") pod "42b0739c-53b5-4133-87d5-d9dd47f5131b" (UID: "42b0739c-53b5-4133-87d5-d9dd47f5131b"). InnerVolumeSpecName "kube-api-access-2bk7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:30:03 crc kubenswrapper[5007]: I0218 20:30:03.569955 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b0739c-53b5-4133-87d5-d9dd47f5131b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "42b0739c-53b5-4133-87d5-d9dd47f5131b" (UID: "42b0739c-53b5-4133-87d5-d9dd47f5131b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:30:03 crc kubenswrapper[5007]: I0218 20:30:03.661884 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bk7w\" (UniqueName: \"kubernetes.io/projected/42b0739c-53b5-4133-87d5-d9dd47f5131b-kube-api-access-2bk7w\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:03 crc kubenswrapper[5007]: I0218 20:30:03.661921 5007 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42b0739c-53b5-4133-87d5-d9dd47f5131b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:03 crc kubenswrapper[5007]: I0218 20:30:03.661933 5007 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42b0739c-53b5-4133-87d5-d9dd47f5131b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:03 crc kubenswrapper[5007]: E0218 20:30:03.912361 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:30:04 crc kubenswrapper[5007]: I0218 20:30:04.062994 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" event={"ID":"42b0739c-53b5-4133-87d5-d9dd47f5131b","Type":"ContainerDied","Data":"a080e16f1b2ca35cc89335c42b38b4d41195014a6b21c026729f9fdb8d0e70ed"} Feb 18 20:30:04 crc kubenswrapper[5007]: I0218 20:30:04.063039 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a080e16f1b2ca35cc89335c42b38b4d41195014a6b21c026729f9fdb8d0e70ed" Feb 18 20:30:04 crc kubenswrapper[5007]: I0218 20:30:04.063053 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-zvsk7" Feb 18 20:30:04 crc kubenswrapper[5007]: I0218 20:30:04.548961 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc"] Feb 18 20:30:04 crc kubenswrapper[5007]: I0218 20:30:04.559470 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-nkgnc"] Feb 18 20:30:05 crc kubenswrapper[5007]: I0218 20:30:05.928599 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674" path="/var/lib/kubelet/pods/5e0a8a78-95da-4a3f-8e8d-e4a79a3f7674/volumes" Feb 18 20:30:07 crc kubenswrapper[5007]: I0218 20:30:07.909666 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:30:07 crc kubenswrapper[5007]: E0218 20:30:07.910489 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:30:15 crc kubenswrapper[5007]: E0218 20:30:15.272051 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:30:15 crc kubenswrapper[5007]: E0218 20:30:15.272830 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnmkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t5gwr_openshift-marketplace(d79bddda-80b1-41bc-92e9-1d41a7c7c84a): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:30:15 crc kubenswrapper[5007]: E0218 20:30:15.274379 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:30:15 crc kubenswrapper[5007]: E0218 20:30:15.438848 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" 
image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:30:15 crc kubenswrapper[5007]: E0218 20:30:15.439051 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:30:15 crc kubenswrapper[5007]: E0218 20:30:15.440653 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:30:21 crc kubenswrapper[5007]: I0218 20:30:21.910369 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:30:21 crc kubenswrapper[5007]: E0218 20:30:21.911360 5007 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:30:28 crc kubenswrapper[5007]: E0218 20:30:28.911740 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:30:28 crc kubenswrapper[5007]: E0218 20:30:28.915050 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:30:34 crc kubenswrapper[5007]: I0218 20:30:34.910064 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:30:34 crc kubenswrapper[5007]: E0218 20:30:34.912283 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:30:39 crc kubenswrapper[5007]: 
E0218 20:30:39.913206 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:30:40 crc kubenswrapper[5007]: E0218 20:30:40.913552 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:30:45 crc kubenswrapper[5007]: I0218 20:30:45.917500 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:30:45 crc kubenswrapper[5007]: E0218 20:30:45.918258 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:30:52 crc kubenswrapper[5007]: E0218 20:30:52.914063 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 
18 20:30:52 crc kubenswrapper[5007]: E0218 20:30:52.914084 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:30:55 crc kubenswrapper[5007]: I0218 20:30:55.421552 5007 scope.go:117] "RemoveContainer" containerID="31ee5ec62d70cf7255ac7c4d7b0ee6da8856602f7ec997e9127bbcd3dd1bbab9" Feb 18 20:30:57 crc kubenswrapper[5007]: I0218 20:30:57.910341 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:30:57 crc kubenswrapper[5007]: E0218 20:30:57.911244 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:31:06 crc kubenswrapper[5007]: E0218 20:31:06.549020 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:31:06 crc kubenswrapper[5007]: E0218 20:31:06.549887 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnmkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t5gwr_openshift-marketplace(d79bddda-80b1-41bc-92e9-1d41a7c7c84a): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:31:06 crc kubenswrapper[5007]: E0218 20:31:06.551039 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:31:07 crc kubenswrapper[5007]: E0218 20:31:07.311400 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" 
image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:31:07 crc kubenswrapper[5007]: E0218 20:31:07.312202 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:31:07 crc kubenswrapper[5007]: E0218 20:31:07.313483 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:31:12 crc kubenswrapper[5007]: I0218 20:31:12.910026 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:31:12 crc kubenswrapper[5007]: E0218 20:31:12.910806 5007 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:31:17 crc kubenswrapper[5007]: E0218 20:31:17.912183 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:31:20 crc kubenswrapper[5007]: E0218 20:31:20.912148 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:31:23 crc kubenswrapper[5007]: I0218 20:31:23.910792 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:31:23 crc kubenswrapper[5007]: E0218 20:31:23.911643 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:31:29 crc kubenswrapper[5007]: 
E0218 20:31:29.915471 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:31:33 crc kubenswrapper[5007]: E0218 20:31:33.912943 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:31:36 crc kubenswrapper[5007]: I0218 20:31:36.910419 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:31:36 crc kubenswrapper[5007]: E0218 20:31:36.911687 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:31:40 crc kubenswrapper[5007]: E0218 20:31:40.912115 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 
18 20:31:46 crc kubenswrapper[5007]: E0218 20:31:46.913570 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:31:51 crc kubenswrapper[5007]: I0218 20:31:51.910829 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:31:51 crc kubenswrapper[5007]: E0218 20:31:51.913834 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:31:53 crc kubenswrapper[5007]: E0218 20:31:53.913168 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:31:55 crc kubenswrapper[5007]: I0218 20:31:55.506271 5007 scope.go:117] "RemoveContainer" containerID="a8b9bec21f97d6d3a141196e79ff412fae0f1435e54123afb3da764e214a3f17" Feb 18 20:31:55 crc kubenswrapper[5007]: I0218 20:31:55.536316 5007 scope.go:117] "RemoveContainer" containerID="440e2fd565cae3e3aed97d0463ca3abd06c351e148bf95f89855824ac9594f31" Feb 18 20:31:55 crc kubenswrapper[5007]: I0218 20:31:55.570886 
5007 scope.go:117] "RemoveContainer" containerID="4b896244dbb82a3f2800ea686af50f36c18169ae2925728b49f4dbbd680a6788" Feb 18 20:31:58 crc kubenswrapper[5007]: E0218 20:31:58.913620 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:32:05 crc kubenswrapper[5007]: I0218 20:32:05.929210 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:32:05 crc kubenswrapper[5007]: E0218 20:32:05.930281 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:32:05 crc kubenswrapper[5007]: E0218 20:32:05.931877 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:32:09 crc kubenswrapper[5007]: E0218 20:32:09.913838 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:32:16 crc kubenswrapper[5007]: E0218 20:32:16.914424 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:32:18 crc kubenswrapper[5007]: I0218 20:32:18.909808 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:32:18 crc kubenswrapper[5007]: E0218 20:32:18.910622 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:32:23 crc kubenswrapper[5007]: E0218 20:32:23.913825 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:32:29 crc kubenswrapper[5007]: E0218 20:32:29.723372 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: 
fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:32:29 crc kubenswrapper[5007]: E0218 20:32:29.724276 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:32:29 crc kubenswrapper[5007]: E0218 20:32:29.725545 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:32:31 crc kubenswrapper[5007]: I0218 20:32:31.910100 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:32:31 crc kubenswrapper[5007]: E0218 20:32:31.910824 5007 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:32:38 crc kubenswrapper[5007]: E0218 20:32:38.263726 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:32:38 crc kubenswrapper[5007]: E0218 20:32:38.264908 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnmkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t5gwr_openshift-marketplace(d79bddda-80b1-41bc-92e9-1d41a7c7c84a): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:32:38 crc kubenswrapper[5007]: E0218 20:32:38.266620 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:32:43 crc 
kubenswrapper[5007]: E0218 20:32:43.912239 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:32:45 crc kubenswrapper[5007]: I0218 20:32:45.918869 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:32:45 crc kubenswrapper[5007]: E0218 20:32:45.919403 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:32:51 crc kubenswrapper[5007]: E0218 20:32:51.931904 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:32:58 crc kubenswrapper[5007]: I0218 20:32:58.910100 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:32:58 crc kubenswrapper[5007]: E0218 20:32:58.911668 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:32:58 crc kubenswrapper[5007]: E0218 20:32:58.912259 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:33:02 crc kubenswrapper[5007]: E0218 20:33:02.912836 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:33:10 crc kubenswrapper[5007]: I0218 20:33:10.909925 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:33:10 crc kubenswrapper[5007]: E0218 20:33:10.910934 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:33:13 crc kubenswrapper[5007]: E0218 20:33:13.913937 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:33:14 crc kubenswrapper[5007]: E0218 20:33:14.912940 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:33:22 crc kubenswrapper[5007]: I0218 20:33:22.909660 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:33:22 crc kubenswrapper[5007]: E0218 20:33:22.910744 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:33:25 crc kubenswrapper[5007]: E0218 20:33:25.927691 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:33:26 crc kubenswrapper[5007]: E0218 20:33:26.910739 5007 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:33:33 crc kubenswrapper[5007]: I0218 20:33:33.909789 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:33:33 crc kubenswrapper[5007]: E0218 20:33:33.910897 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:33:37 crc kubenswrapper[5007]: E0218 20:33:37.913344 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:33:40 crc kubenswrapper[5007]: E0218 20:33:40.912115 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:33:45 crc kubenswrapper[5007]: I0218 20:33:45.917509 5007 scope.go:117] 
"RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:33:46 crc kubenswrapper[5007]: I0218 20:33:46.601069 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"334ca387600d48a4bb7b4099a276c8b049b44225453add1f38a3b6122349b363"} Feb 18 20:33:49 crc kubenswrapper[5007]: E0218 20:33:49.913220 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:33:54 crc kubenswrapper[5007]: E0218 20:33:54.912525 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:34:01 crc kubenswrapper[5007]: E0218 20:34:01.914101 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:34:05 crc kubenswrapper[5007]: E0218 20:34:05.924541 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:34:16 crc kubenswrapper[5007]: E0218 20:34:16.912833 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:34:16 crc kubenswrapper[5007]: E0218 20:34:16.912947 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:34:28 crc kubenswrapper[5007]: E0218 20:34:28.915887 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:34:31 crc kubenswrapper[5007]: E0218 20:34:31.924297 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:34:39 crc kubenswrapper[5007]: E0218 20:34:39.912482 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:34:45 crc kubenswrapper[5007]: E0218 20:34:45.922474 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:34:54 crc kubenswrapper[5007]: E0218 20:34:54.913664 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:34:59 crc kubenswrapper[5007]: E0218 20:34:59.915060 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:35:05 crc kubenswrapper[5007]: E0218 20:35:05.921661 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:35:10 crc kubenswrapper[5007]: I0218 20:35:10.911413 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:35:13 crc kubenswrapper[5007]: E0218 20:35:13.112719 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:35:13 crc kubenswrapper[5007]: E0218 20:35:13.113654 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:13 crc kubenswrapper[5007]: E0218 20:35:13.114937 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:35:17 crc kubenswrapper[5007]: E0218 20:35:17.912543 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:35:25 crc kubenswrapper[5007]: E0218 20:35:25.921206 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.288833 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gjmxc"] Feb 18 20:35:28 crc kubenswrapper[5007]: E0218 20:35:28.289499 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b0739c-53b5-4133-87d5-d9dd47f5131b" containerName="collect-profiles" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.289511 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b0739c-53b5-4133-87d5-d9dd47f5131b" containerName="collect-profiles" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.289713 5007 
memory_manager.go:354] "RemoveStaleState removing state" podUID="42b0739c-53b5-4133-87d5-d9dd47f5131b" containerName="collect-profiles" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.291015 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gjmxc" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.306599 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gjmxc"] Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.423236 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e85c638-b405-4e6c-baeb-5955fc34cb6e-catalog-content\") pod \"redhat-operators-gjmxc\" (UID: \"9e85c638-b405-4e6c-baeb-5955fc34cb6e\") " pod="openshift-marketplace/redhat-operators-gjmxc" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.423326 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qx95\" (UniqueName: \"kubernetes.io/projected/9e85c638-b405-4e6c-baeb-5955fc34cb6e-kube-api-access-6qx95\") pod \"redhat-operators-gjmxc\" (UID: \"9e85c638-b405-4e6c-baeb-5955fc34cb6e\") " pod="openshift-marketplace/redhat-operators-gjmxc" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.423421 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e85c638-b405-4e6c-baeb-5955fc34cb6e-utilities\") pod \"redhat-operators-gjmxc\" (UID: \"9e85c638-b405-4e6c-baeb-5955fc34cb6e\") " pod="openshift-marketplace/redhat-operators-gjmxc" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.525960 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e85c638-b405-4e6c-baeb-5955fc34cb6e-utilities\") pod 
\"redhat-operators-gjmxc\" (UID: \"9e85c638-b405-4e6c-baeb-5955fc34cb6e\") " pod="openshift-marketplace/redhat-operators-gjmxc" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.526206 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e85c638-b405-4e6c-baeb-5955fc34cb6e-catalog-content\") pod \"redhat-operators-gjmxc\" (UID: \"9e85c638-b405-4e6c-baeb-5955fc34cb6e\") " pod="openshift-marketplace/redhat-operators-gjmxc" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.526459 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e85c638-b405-4e6c-baeb-5955fc34cb6e-utilities\") pod \"redhat-operators-gjmxc\" (UID: \"9e85c638-b405-4e6c-baeb-5955fc34cb6e\") " pod="openshift-marketplace/redhat-operators-gjmxc" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.526812 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e85c638-b405-4e6c-baeb-5955fc34cb6e-catalog-content\") pod \"redhat-operators-gjmxc\" (UID: \"9e85c638-b405-4e6c-baeb-5955fc34cb6e\") " pod="openshift-marketplace/redhat-operators-gjmxc" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.526944 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qx95\" (UniqueName: \"kubernetes.io/projected/9e85c638-b405-4e6c-baeb-5955fc34cb6e-kube-api-access-6qx95\") pod \"redhat-operators-gjmxc\" (UID: \"9e85c638-b405-4e6c-baeb-5955fc34cb6e\") " pod="openshift-marketplace/redhat-operators-gjmxc" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.568884 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qx95\" (UniqueName: \"kubernetes.io/projected/9e85c638-b405-4e6c-baeb-5955fc34cb6e-kube-api-access-6qx95\") pod \"redhat-operators-gjmxc\" (UID: 
\"9e85c638-b405-4e6c-baeb-5955fc34cb6e\") " pod="openshift-marketplace/redhat-operators-gjmxc" Feb 18 20:35:28 crc kubenswrapper[5007]: I0218 20:35:28.622609 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gjmxc" Feb 18 20:35:29 crc kubenswrapper[5007]: I0218 20:35:29.182016 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gjmxc"] Feb 18 20:35:29 crc kubenswrapper[5007]: I0218 20:35:29.758558 5007 generic.go:334] "Generic (PLEG): container finished" podID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" containerID="d56758707ebd325b058fc00f1107f2bbbf5b682f11d5a6c9c528fc802ae477eb" exitCode=0 Feb 18 20:35:29 crc kubenswrapper[5007]: I0218 20:35:29.758638 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjmxc" event={"ID":"9e85c638-b405-4e6c-baeb-5955fc34cb6e","Type":"ContainerDied","Data":"d56758707ebd325b058fc00f1107f2bbbf5b682f11d5a6c9c528fc802ae477eb"} Feb 18 20:35:29 crc kubenswrapper[5007]: I0218 20:35:29.758866 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjmxc" event={"ID":"9e85c638-b405-4e6c-baeb-5955fc34cb6e","Type":"ContainerStarted","Data":"47f64bc27b211f0fec8f358eeef6caf20bffb5b18a8dec02f43c7a9174282390"} Feb 18 20:35:30 crc kubenswrapper[5007]: E0218 20:35:30.067539 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:35:30 crc kubenswrapper[5007]: E0218 20:35:30.067782 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnmkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t5gwr_openshift-marketplace(d79bddda-80b1-41bc-92e9-1d41a7c7c84a): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:30 crc kubenswrapper[5007]: E0218 20:35:30.069062 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:35:30 crc kubenswrapper[5007]: E0218 20:35:30.844836 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:35:30 crc kubenswrapper[5007]: E0218 20:35:30.845105 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gjmxc_openshift-marketplace(9e85c638-b405-4e6c-baeb-5955fc34cb6e): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" 
logger="UnhandledError" Feb 18 20:35:30 crc kubenswrapper[5007]: E0218 20:35:30.846361 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:35:31 crc kubenswrapper[5007]: E0218 20:35:31.782729 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:35:38 crc kubenswrapper[5007]: E0218 20:35:38.913160 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:35:40 crc kubenswrapper[5007]: E0218 20:35:40.917704 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:35:45 crc kubenswrapper[5007]: E0218 20:35:45.877107 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching 
blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:35:45 crc kubenswrapper[5007]: E0218 20:35:45.877761 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gjmxc_openshift-marketplace(9e85c638-b405-4e6c-baeb-5955fc34cb6e): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: 
received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:45 crc kubenswrapper[5007]: E0218 20:35:45.879331 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:35:50 crc kubenswrapper[5007]: E0218 20:35:50.914262 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:35:54 crc kubenswrapper[5007]: E0218 20:35:54.913150 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:36:00 crc kubenswrapper[5007]: E0218 20:36:00.913386 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:36:02 crc kubenswrapper[5007]: E0218 20:36:02.912049 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:36:08 crc kubenswrapper[5007]: E0218 20:36:08.912036 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:36:15 crc kubenswrapper[5007]: E0218 20:36:15.465480 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:36:15 crc kubenswrapper[5007]: E0218 20:36:15.466411 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gjmxc_openshift-marketplace(9e85c638-b405-4e6c-baeb-5955fc34cb6e): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:15 crc kubenswrapper[5007]: E0218 20:36:15.467905 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-gjmxc" 
podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:36:15 crc kubenswrapper[5007]: I0218 20:36:15.663805 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:36:15 crc kubenswrapper[5007]: I0218 20:36:15.663870 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:36:16 crc kubenswrapper[5007]: E0218 20:36:16.912350 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:36:19 crc kubenswrapper[5007]: E0218 20:36:19.913345 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:36:25 crc kubenswrapper[5007]: E0218 20:36:25.926128 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:36:27 crc kubenswrapper[5007]: E0218 20:36:27.911702 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:36:34 crc kubenswrapper[5007]: E0218 20:36:34.911913 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:36:38 crc kubenswrapper[5007]: E0218 20:36:38.913892 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:36:40 crc kubenswrapper[5007]: E0218 20:36:40.911273 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:36:45 crc kubenswrapper[5007]: I0218 20:36:45.663870 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:36:45 crc kubenswrapper[5007]: I0218 20:36:45.664646 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:36:49 crc kubenswrapper[5007]: E0218 20:36:49.913593 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:36:53 crc kubenswrapper[5007]: E0218 20:36:53.911737 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:36:54 crc kubenswrapper[5007]: E0218 20:36:54.913124 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:36:55 crc kubenswrapper[5007]: I0218 20:36:55.686580 5007 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bx7n9"] Feb 18 20:36:55 crc kubenswrapper[5007]: I0218 20:36:55.693049 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bx7n9" Feb 18 20:36:55 crc kubenswrapper[5007]: I0218 20:36:55.706028 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bx7n9"] Feb 18 20:36:55 crc kubenswrapper[5007]: I0218 20:36:55.851037 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t742h\" (UniqueName: \"kubernetes.io/projected/527ad024-e9b5-4aaf-96cf-d8f86ac2e70d-kube-api-access-t742h\") pod \"redhat-marketplace-bx7n9\" (UID: \"527ad024-e9b5-4aaf-96cf-d8f86ac2e70d\") " pod="openshift-marketplace/redhat-marketplace-bx7n9" Feb 18 20:36:55 crc kubenswrapper[5007]: I0218 20:36:55.851119 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527ad024-e9b5-4aaf-96cf-d8f86ac2e70d-catalog-content\") pod \"redhat-marketplace-bx7n9\" (UID: \"527ad024-e9b5-4aaf-96cf-d8f86ac2e70d\") " pod="openshift-marketplace/redhat-marketplace-bx7n9" Feb 18 20:36:55 crc kubenswrapper[5007]: I0218 20:36:55.851257 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527ad024-e9b5-4aaf-96cf-d8f86ac2e70d-utilities\") pod \"redhat-marketplace-bx7n9\" (UID: \"527ad024-e9b5-4aaf-96cf-d8f86ac2e70d\") " pod="openshift-marketplace/redhat-marketplace-bx7n9" Feb 18 20:36:55 crc kubenswrapper[5007]: I0218 20:36:55.953366 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527ad024-e9b5-4aaf-96cf-d8f86ac2e70d-utilities\") pod \"redhat-marketplace-bx7n9\" (UID: 
\"527ad024-e9b5-4aaf-96cf-d8f86ac2e70d\") " pod="openshift-marketplace/redhat-marketplace-bx7n9" Feb 18 20:36:55 crc kubenswrapper[5007]: I0218 20:36:55.953636 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t742h\" (UniqueName: \"kubernetes.io/projected/527ad024-e9b5-4aaf-96cf-d8f86ac2e70d-kube-api-access-t742h\") pod \"redhat-marketplace-bx7n9\" (UID: \"527ad024-e9b5-4aaf-96cf-d8f86ac2e70d\") " pod="openshift-marketplace/redhat-marketplace-bx7n9" Feb 18 20:36:55 crc kubenswrapper[5007]: I0218 20:36:55.953704 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527ad024-e9b5-4aaf-96cf-d8f86ac2e70d-catalog-content\") pod \"redhat-marketplace-bx7n9\" (UID: \"527ad024-e9b5-4aaf-96cf-d8f86ac2e70d\") " pod="openshift-marketplace/redhat-marketplace-bx7n9" Feb 18 20:36:55 crc kubenswrapper[5007]: I0218 20:36:55.954092 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527ad024-e9b5-4aaf-96cf-d8f86ac2e70d-utilities\") pod \"redhat-marketplace-bx7n9\" (UID: \"527ad024-e9b5-4aaf-96cf-d8f86ac2e70d\") " pod="openshift-marketplace/redhat-marketplace-bx7n9" Feb 18 20:36:55 crc kubenswrapper[5007]: I0218 20:36:55.954296 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527ad024-e9b5-4aaf-96cf-d8f86ac2e70d-catalog-content\") pod \"redhat-marketplace-bx7n9\" (UID: \"527ad024-e9b5-4aaf-96cf-d8f86ac2e70d\") " pod="openshift-marketplace/redhat-marketplace-bx7n9" Feb 18 20:36:55 crc kubenswrapper[5007]: I0218 20:36:55.982726 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t742h\" (UniqueName: \"kubernetes.io/projected/527ad024-e9b5-4aaf-96cf-d8f86ac2e70d-kube-api-access-t742h\") pod \"redhat-marketplace-bx7n9\" (UID: 
\"527ad024-e9b5-4aaf-96cf-d8f86ac2e70d\") " pod="openshift-marketplace/redhat-marketplace-bx7n9" Feb 18 20:36:56 crc kubenswrapper[5007]: I0218 20:36:56.030932 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bx7n9" Feb 18 20:36:56 crc kubenswrapper[5007]: I0218 20:36:56.498129 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bx7n9"] Feb 18 20:36:56 crc kubenswrapper[5007]: I0218 20:36:56.816676 5007 generic.go:334] "Generic (PLEG): container finished" podID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" containerID="b761d5d8630e1fc721210f248b1c03b5a01faee36e4ffb0923d640250917071b" exitCode=0 Feb 18 20:36:56 crc kubenswrapper[5007]: I0218 20:36:56.816738 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bx7n9" event={"ID":"527ad024-e9b5-4aaf-96cf-d8f86ac2e70d","Type":"ContainerDied","Data":"b761d5d8630e1fc721210f248b1c03b5a01faee36e4ffb0923d640250917071b"} Feb 18 20:36:56 crc kubenswrapper[5007]: I0218 20:36:56.816962 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bx7n9" event={"ID":"527ad024-e9b5-4aaf-96cf-d8f86ac2e70d","Type":"ContainerStarted","Data":"c1c072daa2dab9b0a25155a8f1947244e9ac6664f2623476259adc7891581ff4"} Feb 18 20:36:59 crc kubenswrapper[5007]: E0218 20:36:59.115361 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:36:59 crc kubenswrapper[5007]: E0218 20:36:59.116153 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t742h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bx7n9_openshift-marketplace(527ad024-e9b5-4aaf-96cf-d8f86ac2e70d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:59 crc kubenswrapper[5007]: E0218 20:36:59.117407 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:36:59 crc kubenswrapper[5007]: E0218 20:36:59.847288 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:37:01 crc kubenswrapper[5007]: E0218 20:37:01.911571 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:37:04 crc kubenswrapper[5007]: E0218 20:37:04.911880 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:37:07 crc kubenswrapper[5007]: E0218 20:37:07.761641 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:37:07 crc kubenswrapper[5007]: E0218 20:37:07.762417 5007 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gjmxc_openshift-marketplace(9e85c638-b405-4e6c-baeb-5955fc34cb6e): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:37:07 crc kubenswrapper[5007]: E0218 20:37:07.763833 5007 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:37:12 crc kubenswrapper[5007]: E0218 20:37:12.636740 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:37:12 crc kubenswrapper[5007]: E0218 20:37:12.637525 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t742h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bx7n9_openshift-marketplace(527ad024-e9b5-4aaf-96cf-d8f86ac2e70d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:37:12 crc kubenswrapper[5007]: E0218 20:37:12.638733 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" 
podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:37:13 crc kubenswrapper[5007]: E0218 20:37:13.912935 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:37:15 crc kubenswrapper[5007]: I0218 20:37:15.663826 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:37:15 crc kubenswrapper[5007]: I0218 20:37:15.664232 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:37:15 crc kubenswrapper[5007]: I0218 20:37:15.664336 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 20:37:15 crc kubenswrapper[5007]: I0218 20:37:15.666019 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"334ca387600d48a4bb7b4099a276c8b049b44225453add1f38a3b6122349b363"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:37:15 crc kubenswrapper[5007]: I0218 20:37:15.666204 5007 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://334ca387600d48a4bb7b4099a276c8b049b44225453add1f38a3b6122349b363" gracePeriod=600 Feb 18 20:37:16 crc kubenswrapper[5007]: I0218 20:37:16.007842 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="334ca387600d48a4bb7b4099a276c8b049b44225453add1f38a3b6122349b363" exitCode=0 Feb 18 20:37:16 crc kubenswrapper[5007]: I0218 20:37:16.008054 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"334ca387600d48a4bb7b4099a276c8b049b44225453add1f38a3b6122349b363"} Feb 18 20:37:16 crc kubenswrapper[5007]: I0218 20:37:16.008201 5007 scope.go:117] "RemoveContainer" containerID="185895c719655649b2a19e2e338ad9c1eef5fbb2a8b28a3dfc97d41afc0722f4" Feb 18 20:37:17 crc kubenswrapper[5007]: I0218 20:37:17.024363 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e"} Feb 18 20:37:17 crc kubenswrapper[5007]: E0218 20:37:17.912029 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:37:19 crc kubenswrapper[5007]: E0218 20:37:19.926113 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:37:23 crc kubenswrapper[5007]: E0218 20:37:23.912999 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:37:28 crc kubenswrapper[5007]: E0218 20:37:28.913394 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:37:28 crc kubenswrapper[5007]: E0218 20:37:28.913581 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:37:30 crc kubenswrapper[5007]: E0218 20:37:30.914934 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:37:38 crc kubenswrapper[5007]: E0218 
20:37:38.293272 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:37:38 crc kubenswrapper[5007]: E0218 20:37:38.293981 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t742h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod redhat-marketplace-bx7n9_openshift-marketplace(527ad024-e9b5-4aaf-96cf-d8f86ac2e70d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:37:38 crc kubenswrapper[5007]: E0218 20:37:38.295731 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:37:39 crc kubenswrapper[5007]: E0218 20:37:39.912606 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:37:42 crc kubenswrapper[5007]: E0218 20:37:42.912273 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:37:42 crc kubenswrapper[5007]: E0218 20:37:42.912369 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" 
podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:37:47 crc kubenswrapper[5007]: E0218 20:37:47.448960 5007 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.132:35010->38.102.83.132:41997: write tcp 38.102.83.132:35010->38.102.83.132:41997: write: broken pipe Feb 18 20:37:52 crc kubenswrapper[5007]: E0218 20:37:52.910893 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:37:54 crc kubenswrapper[5007]: E0218 20:37:54.913337 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:37:56 crc kubenswrapper[5007]: E0218 20:37:56.913390 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:37:57 crc kubenswrapper[5007]: E0218 20:37:57.914287 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 
18 20:38:07 crc kubenswrapper[5007]: E0218 20:38:07.913198 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:38:08 crc kubenswrapper[5007]: E0218 20:38:08.913113 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:38:08 crc kubenswrapper[5007]: E0218 20:38:08.913307 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:38:12 crc kubenswrapper[5007]: E0218 20:38:12.913881 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:38:19 crc kubenswrapper[5007]: E0218 20:38:19.912774 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:38:20 crc kubenswrapper[5007]: E0218 20:38:20.910978 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:38:21 crc kubenswrapper[5007]: E0218 20:38:21.372717 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:38:21 crc kubenswrapper[5007]: E0218 20:38:21.372880 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t742h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bx7n9_openshift-marketplace(527ad024-e9b5-4aaf-96cf-d8f86ac2e70d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:38:21 crc kubenswrapper[5007]: E0218 20:38:21.374114 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" 
podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:38:27 crc kubenswrapper[5007]: E0218 20:38:27.913153 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:38:31 crc kubenswrapper[5007]: E0218 20:38:31.912259 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:38:35 crc kubenswrapper[5007]: E0218 20:38:35.918507 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:38:38 crc kubenswrapper[5007]: E0218 20:38:38.145478 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:38:38 crc kubenswrapper[5007]: E0218 20:38:38.146262 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gjmxc_openshift-marketplace(9e85c638-b405-4e6c-baeb-5955fc34cb6e): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:38:38 crc kubenswrapper[5007]: E0218 20:38:38.147698 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:38:40 crc kubenswrapper[5007]: E0218 20:38:40.913801 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:38:43 crc kubenswrapper[5007]: E0218 20:38:43.915184 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:38:49 crc kubenswrapper[5007]: E0218 20:38:49.915697 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:38:49 crc kubenswrapper[5007]: E0218 20:38:49.939256 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:38:55 crc kubenswrapper[5007]: E0218 
20:38:55.925531 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:38:57 crc kubenswrapper[5007]: E0218 20:38:57.913054 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:39:00 crc kubenswrapper[5007]: E0218 20:39:00.913170 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:39:02 crc kubenswrapper[5007]: E0218 20:39:02.912394 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:39:08 crc kubenswrapper[5007]: E0218 20:39:08.912117 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" 
podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:39:09 crc kubenswrapper[5007]: E0218 20:39:09.913290 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:39:14 crc kubenswrapper[5007]: E0218 20:39:14.913279 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:39:15 crc kubenswrapper[5007]: E0218 20:39:15.916844 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:39:21 crc kubenswrapper[5007]: E0218 20:39:21.913415 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:39:23 crc kubenswrapper[5007]: E0218 20:39:23.912638 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:39:27 crc kubenswrapper[5007]: E0218 20:39:27.913086 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:39:28 crc kubenswrapper[5007]: E0218 20:39:28.911212 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:39:35 crc kubenswrapper[5007]: E0218 20:39:35.927258 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:39:36 crc kubenswrapper[5007]: E0218 20:39:36.911748 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:39:40 crc kubenswrapper[5007]: E0218 20:39:40.929559 5007 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:39:40 crc kubenswrapper[5007]: E0218 20:39:40.930510 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:39:45 crc kubenswrapper[5007]: I0218 20:39:45.663483 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:39:45 crc kubenswrapper[5007]: I0218 20:39:45.664046 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:39:46 crc kubenswrapper[5007]: E0218 20:39:46.912318 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:39:49 crc kubenswrapper[5007]: E0218 20:39:49.508617 5007 
log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:39:49 crc kubenswrapper[5007]: E0218 20:39:49.509131 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t742h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-bx7n9_openshift-marketplace(527ad024-e9b5-4aaf-96cf-d8f86ac2e70d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:39:49 crc kubenswrapper[5007]: E0218 20:39:49.510775 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:39:51 crc kubenswrapper[5007]: E0218 20:39:51.912545 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:39:52 crc kubenswrapper[5007]: E0218 20:39:52.911324 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:39:59 crc kubenswrapper[5007]: E0218 20:39:59.911689 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" 
podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:40:03 crc kubenswrapper[5007]: E0218 20:40:03.912987 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:40:04 crc kubenswrapper[5007]: E0218 20:40:04.911249 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:40:07 crc kubenswrapper[5007]: E0218 20:40:07.913342 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:40:14 crc kubenswrapper[5007]: E0218 20:40:14.913042 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:40:15 crc kubenswrapper[5007]: I0218 20:40:15.664297 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:40:15 crc kubenswrapper[5007]: I0218 20:40:15.664384 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:40:15 crc kubenswrapper[5007]: I0218 20:40:15.924652 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:40:16 crc kubenswrapper[5007]: E0218 20:40:16.536005 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:40:16 crc kubenswrapper[5007]: E0218 20:40:16.536257 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:40:16 crc kubenswrapper[5007]: E0218 20:40:16.537535 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:40:16 crc kubenswrapper[5007]: E0218 20:40:16.911892 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:40:21 crc kubenswrapper[5007]: E0218 20:40:21.912233 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:40:28 crc kubenswrapper[5007]: E0218 20:40:28.912715 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:40:30 crc kubenswrapper[5007]: E0218 20:40:30.914131 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:40:30 crc kubenswrapper[5007]: E0218 20:40:30.914136 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:40:35 crc kubenswrapper[5007]: E0218 20:40:35.930339 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:40:40 crc kubenswrapper[5007]: E0218 20:40:40.978183 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:40:40 crc kubenswrapper[5007]: E0218 20:40:40.979165 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnmkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t5gwr_openshift-marketplace(d79bddda-80b1-41bc-92e9-1d41a7c7c84a): ErrImagePull: parsing image configuration: fetching 
blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:40:40 crc kubenswrapper[5007]: E0218 20:40:40.980562 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:40:42 crc kubenswrapper[5007]: E0218 20:40:42.912998 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:40:45 crc kubenswrapper[5007]: I0218 20:40:45.663972 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:40:45 crc kubenswrapper[5007]: I0218 20:40:45.664489 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:40:45 crc kubenswrapper[5007]: I0218 20:40:45.664552 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 20:40:45 crc kubenswrapper[5007]: I0218 20:40:45.665574 5007 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:40:45 crc kubenswrapper[5007]: I0218 20:40:45.665652 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" gracePeriod=600 Feb 18 20:40:45 crc kubenswrapper[5007]: E0218 20:40:45.791972 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:40:45 crc kubenswrapper[5007]: E0218 20:40:45.920805 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:40:46 crc kubenswrapper[5007]: I0218 20:40:46.474057 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" exitCode=0 Feb 18 20:40:46 crc kubenswrapper[5007]: I0218 20:40:46.474109 5007 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e"} Feb 18 20:40:46 crc kubenswrapper[5007]: I0218 20:40:46.474423 5007 scope.go:117] "RemoveContainer" containerID="334ca387600d48a4bb7b4099a276c8b049b44225453add1f38a3b6122349b363" Feb 18 20:40:46 crc kubenswrapper[5007]: I0218 20:40:46.475410 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:40:46 crc kubenswrapper[5007]: E0218 20:40:46.476059 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:40:48 crc kubenswrapper[5007]: E0218 20:40:48.916178 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:40:54 crc kubenswrapper[5007]: E0218 20:40:54.912382 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:40:55 crc kubenswrapper[5007]: E0218 20:40:55.918309 5007 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:40:58 crc kubenswrapper[5007]: I0218 20:40:58.910507 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:40:58 crc kubenswrapper[5007]: E0218 20:40:58.911261 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:40:58 crc kubenswrapper[5007]: E0218 20:40:58.914356 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:41:01 crc kubenswrapper[5007]: E0218 20:41:01.913224 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:41:09 crc kubenswrapper[5007]: E0218 20:41:09.912966 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:41:09 crc kubenswrapper[5007]: E0218 20:41:09.912975 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:41:11 crc kubenswrapper[5007]: I0218 20:41:11.911407 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:41:11 crc kubenswrapper[5007]: E0218 20:41:11.912230 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:41:12 crc kubenswrapper[5007]: E0218 20:41:12.911858 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:41:16 crc kubenswrapper[5007]: E0218 20:41:16.912608 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:41:20 crc kubenswrapper[5007]: E0218 20:41:20.913352 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:41:24 crc kubenswrapper[5007]: E0218 20:41:24.914130 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:41:25 crc kubenswrapper[5007]: I0218 20:41:25.928492 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:41:25 crc kubenswrapper[5007]: E0218 20:41:25.929350 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:41:27 crc kubenswrapper[5007]: E0218 20:41:27.913090 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:41:32 crc kubenswrapper[5007]: E0218 20:41:32.679187 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:41:32 crc kubenswrapper[5007]: E0218 20:41:32.681911 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Se
ccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gjmxc_openshift-marketplace(9e85c638-b405-4e6c-baeb-5955fc34cb6e): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:41:32 crc kubenswrapper[5007]: E0218 20:41:32.683348 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:41:35 crc kubenswrapper[5007]: E0218 20:41:35.919081 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:41:36 crc kubenswrapper[5007]: E0218 20:41:36.914151 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:41:37 crc kubenswrapper[5007]: I0218 20:41:37.911130 5007 scope.go:117] "RemoveContainer" 
containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:41:37 crc kubenswrapper[5007]: E0218 20:41:37.912199 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:41:38 crc kubenswrapper[5007]: E0218 20:41:38.913972 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:41:44 crc kubenswrapper[5007]: E0218 20:41:44.911025 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:41:48 crc kubenswrapper[5007]: I0218 20:41:48.910011 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:41:48 crc kubenswrapper[5007]: E0218 20:41:48.910744 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:41:48 crc kubenswrapper[5007]: E0218 20:41:48.912360 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:41:49 crc kubenswrapper[5007]: E0218 20:41:49.912684 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:41:51 crc kubenswrapper[5007]: E0218 20:41:51.913889 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:41:58 crc kubenswrapper[5007]: E0218 20:41:58.913581 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:41:59 crc kubenswrapper[5007]: I0218 20:41:59.911035 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:41:59 
crc kubenswrapper[5007]: E0218 20:41:59.911850 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:41:59 crc kubenswrapper[5007]: E0218 20:41:59.913547 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:42:04 crc kubenswrapper[5007]: E0218 20:42:04.910862 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:42:06 crc kubenswrapper[5007]: E0218 20:42:06.910723 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:42:09 crc kubenswrapper[5007]: E0218 20:42:09.917867 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:42:11 crc kubenswrapper[5007]: I0218 20:42:11.922745 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:42:11 crc kubenswrapper[5007]: E0218 20:42:11.923529 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:42:11 crc kubenswrapper[5007]: E0218 20:42:11.924949 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:42:15 crc kubenswrapper[5007]: E0218 20:42:15.924288 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:42:20 crc kubenswrapper[5007]: E0218 20:42:20.911980 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:42:20 crc kubenswrapper[5007]: E0218 20:42:20.912120 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:42:23 crc kubenswrapper[5007]: I0218 20:42:23.912057 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:42:23 crc kubenswrapper[5007]: E0218 20:42:23.913415 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:42:23 crc kubenswrapper[5007]: E0218 20:42:23.916957 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:42:27 crc kubenswrapper[5007]: E0218 20:42:27.912816 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:42:31 crc kubenswrapper[5007]: E0218 20:42:31.914318 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:42:34 crc kubenswrapper[5007]: E0218 20:42:34.912039 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:42:38 crc kubenswrapper[5007]: E0218 20:42:38.234280 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:42:38 crc kubenswrapper[5007]: E0218 20:42:38.234900 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t742h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bx7n9_openshift-marketplace(527ad024-e9b5-4aaf-96cf-d8f86ac2e70d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:42:38 crc kubenswrapper[5007]: E0218 20:42:38.236499 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" 
podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:42:38 crc kubenswrapper[5007]: I0218 20:42:38.910408 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:42:38 crc kubenswrapper[5007]: E0218 20:42:38.911064 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:42:38 crc kubenswrapper[5007]: E0218 20:42:38.912235 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:42:44 crc kubenswrapper[5007]: E0218 20:42:44.913563 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:42:49 crc kubenswrapper[5007]: E0218 20:42:49.912735 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" 
podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:42:49 crc kubenswrapper[5007]: E0218 20:42:49.912822 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:42:50 crc kubenswrapper[5007]: I0218 20:42:50.913087 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:42:50 crc kubenswrapper[5007]: E0218 20:42:50.913822 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:42:53 crc kubenswrapper[5007]: E0218 20:42:53.913231 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:42:55 crc kubenswrapper[5007]: E0218 20:42:55.925981 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" 
podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:43:02 crc kubenswrapper[5007]: E0218 20:43:02.912685 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:43:03 crc kubenswrapper[5007]: E0218 20:43:03.913953 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:43:04 crc kubenswrapper[5007]: I0218 20:43:04.911646 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:43:04 crc kubenswrapper[5007]: E0218 20:43:04.911967 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:43:06 crc kubenswrapper[5007]: E0218 20:43:06.913019 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:43:08 crc kubenswrapper[5007]: E0218 
20:43:08.911635 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:43:16 crc kubenswrapper[5007]: I0218 20:43:16.909742 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:43:16 crc kubenswrapper[5007]: E0218 20:43:16.910470 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:43:17 crc kubenswrapper[5007]: E0218 20:43:17.913292 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:43:17 crc kubenswrapper[5007]: E0218 20:43:17.913697 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:43:17 crc kubenswrapper[5007]: E0218 20:43:17.914988 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:43:19 crc kubenswrapper[5007]: E0218 20:43:19.913949 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:43:28 crc kubenswrapper[5007]: I0218 20:43:28.910879 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:43:28 crc kubenswrapper[5007]: E0218 20:43:28.912046 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:43:29 crc kubenswrapper[5007]: E0218 20:43:29.911709 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:43:30 crc kubenswrapper[5007]: E0218 20:43:30.911383 5007 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:43:30 crc kubenswrapper[5007]: E0218 20:43:30.911382 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:43:30 crc kubenswrapper[5007]: E0218 20:43:30.912538 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:43:41 crc kubenswrapper[5007]: I0218 20:43:41.910409 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:43:41 crc kubenswrapper[5007]: E0218 20:43:41.913137 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:43:42 crc kubenswrapper[5007]: E0218 20:43:42.911931 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:43:43 crc kubenswrapper[5007]: E0218 20:43:43.913188 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:43:44 crc kubenswrapper[5007]: E0218 20:43:44.913992 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:43:45 crc kubenswrapper[5007]: E0218 20:43:45.912039 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:43:53 crc kubenswrapper[5007]: I0218 20:43:53.911051 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:43:53 crc kubenswrapper[5007]: E0218 20:43:53.912135 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:43:55 crc kubenswrapper[5007]: E0218 20:43:55.926640 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:43:58 crc kubenswrapper[5007]: E0218 20:43:58.911955 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:43:58 crc kubenswrapper[5007]: E0218 20:43:58.911991 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:43:58 crc kubenswrapper[5007]: E0218 20:43:58.912113 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:44:04 crc kubenswrapper[5007]: I0218 20:44:04.910038 5007 scope.go:117] 
"RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:44:04 crc kubenswrapper[5007]: E0218 20:44:04.911008 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:44:07 crc kubenswrapper[5007]: E0218 20:44:07.912875 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:44:12 crc kubenswrapper[5007]: E0218 20:44:12.911423 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:44:12 crc kubenswrapper[5007]: E0218 20:44:12.912424 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:44:13 crc kubenswrapper[5007]: E0218 20:44:13.912676 5007 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:44:16 crc kubenswrapper[5007]: I0218 20:44:16.909980 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:44:16 crc kubenswrapper[5007]: E0218 20:44:16.910852 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:44:22 crc kubenswrapper[5007]: E0218 20:44:22.915564 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:44:24 crc kubenswrapper[5007]: E0218 20:44:24.128705 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:44:25 crc kubenswrapper[5007]: E0218 20:44:25.925058 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:44:27 crc kubenswrapper[5007]: E0218 20:44:27.911490 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:44:31 crc kubenswrapper[5007]: I0218 20:44:31.910421 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:44:31 crc kubenswrapper[5007]: E0218 20:44:31.911198 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:44:33 crc kubenswrapper[5007]: E0218 20:44:33.912206 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:44:34 crc kubenswrapper[5007]: E0218 20:44:34.911749 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:44:39 crc kubenswrapper[5007]: E0218 20:44:39.911515 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:44:42 crc kubenswrapper[5007]: E0218 20:44:42.913544 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:44:44 crc kubenswrapper[5007]: E0218 20:44:44.912822 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:44:46 crc kubenswrapper[5007]: I0218 20:44:46.910462 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:44:46 crc kubenswrapper[5007]: E0218 20:44:46.911356 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:44:49 crc kubenswrapper[5007]: E0218 20:44:49.912308 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:44:53 crc kubenswrapper[5007]: E0218 20:44:53.915638 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:44:57 crc kubenswrapper[5007]: E0218 20:44:57.912986 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:44:59 crc kubenswrapper[5007]: I0218 20:44:59.914578 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:44:59 crc kubenswrapper[5007]: E0218 20:44:59.915511 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:44:59 crc kubenswrapper[5007]: E0218 20:44:59.919052 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.211752 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp"] Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.214554 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.217126 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.217509 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.238292 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp"] Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.391739 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a5c3587-c990-4b45-b0e9-648ec99cd640-secret-volume\") pod \"collect-profiles-29524125-qqzqp\" (UID: 
\"3a5c3587-c990-4b45-b0e9-648ec99cd640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.391822 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntm6f\" (UniqueName: \"kubernetes.io/projected/3a5c3587-c990-4b45-b0e9-648ec99cd640-kube-api-access-ntm6f\") pod \"collect-profiles-29524125-qqzqp\" (UID: \"3a5c3587-c990-4b45-b0e9-648ec99cd640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.391886 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a5c3587-c990-4b45-b0e9-648ec99cd640-config-volume\") pod \"collect-profiles-29524125-qqzqp\" (UID: \"3a5c3587-c990-4b45-b0e9-648ec99cd640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.493992 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a5c3587-c990-4b45-b0e9-648ec99cd640-secret-volume\") pod \"collect-profiles-29524125-qqzqp\" (UID: \"3a5c3587-c990-4b45-b0e9-648ec99cd640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.494098 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntm6f\" (UniqueName: \"kubernetes.io/projected/3a5c3587-c990-4b45-b0e9-648ec99cd640-kube-api-access-ntm6f\") pod \"collect-profiles-29524125-qqzqp\" (UID: \"3a5c3587-c990-4b45-b0e9-648ec99cd640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.494167 5007 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a5c3587-c990-4b45-b0e9-648ec99cd640-config-volume\") pod \"collect-profiles-29524125-qqzqp\" (UID: \"3a5c3587-c990-4b45-b0e9-648ec99cd640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.494948 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a5c3587-c990-4b45-b0e9-648ec99cd640-config-volume\") pod \"collect-profiles-29524125-qqzqp\" (UID: \"3a5c3587-c990-4b45-b0e9-648ec99cd640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.505709 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a5c3587-c990-4b45-b0e9-648ec99cd640-secret-volume\") pod \"collect-profiles-29524125-qqzqp\" (UID: \"3a5c3587-c990-4b45-b0e9-648ec99cd640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.513147 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntm6f\" (UniqueName: \"kubernetes.io/projected/3a5c3587-c990-4b45-b0e9-648ec99cd640-kube-api-access-ntm6f\") pod \"collect-profiles-29524125-qqzqp\" (UID: \"3a5c3587-c990-4b45-b0e9-648ec99cd640\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:00 crc kubenswrapper[5007]: I0218 20:45:00.540885 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:01 crc kubenswrapper[5007]: I0218 20:45:01.090944 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp"] Feb 18 20:45:01 crc kubenswrapper[5007]: I0218 20:45:01.527986 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" event={"ID":"3a5c3587-c990-4b45-b0e9-648ec99cd640","Type":"ContainerStarted","Data":"ce191859b3ef3f5d8d389e21cfd9570ceb8aa90d67c35a8b3f2af35de77fa068"} Feb 18 20:45:01 crc kubenswrapper[5007]: I0218 20:45:01.528279 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" event={"ID":"3a5c3587-c990-4b45-b0e9-648ec99cd640","Type":"ContainerStarted","Data":"3267129b1c35e5e80736fdbd071ecb06b17c249e159a589439f5c953383b026d"} Feb 18 20:45:01 crc kubenswrapper[5007]: I0218 20:45:01.554394 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" podStartSLOduration=1.554371638 podStartE2EDuration="1.554371638s" podCreationTimestamp="2026-02-18 20:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:45:01.55338187 +0000 UTC m=+3986.335501394" watchObservedRunningTime="2026-02-18 20:45:01.554371638 +0000 UTC m=+3986.336491162" Feb 18 20:45:01 crc kubenswrapper[5007]: E0218 20:45:01.693983 5007 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a5c3587_c990_4b45_b0e9_648ec99cd640.slice/crio-conmon-ce191859b3ef3f5d8d389e21cfd9570ceb8aa90d67c35a8b3f2af35de77fa068.scope\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a5c3587_c990_4b45_b0e9_648ec99cd640.slice/crio-ce191859b3ef3f5d8d389e21cfd9570ceb8aa90d67c35a8b3f2af35de77fa068.scope\": RecentStats: unable to find data in memory cache]" Feb 18 20:45:01 crc kubenswrapper[5007]: E0218 20:45:01.911542 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:45:02 crc kubenswrapper[5007]: I0218 20:45:02.543645 5007 generic.go:334] "Generic (PLEG): container finished" podID="3a5c3587-c990-4b45-b0e9-648ec99cd640" containerID="ce191859b3ef3f5d8d389e21cfd9570ceb8aa90d67c35a8b3f2af35de77fa068" exitCode=0 Feb 18 20:45:02 crc kubenswrapper[5007]: I0218 20:45:02.543715 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" event={"ID":"3a5c3587-c990-4b45-b0e9-648ec99cd640","Type":"ContainerDied","Data":"ce191859b3ef3f5d8d389e21cfd9570ceb8aa90d67c35a8b3f2af35de77fa068"} Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.064766 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.183283 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a5c3587-c990-4b45-b0e9-648ec99cd640-secret-volume\") pod \"3a5c3587-c990-4b45-b0e9-648ec99cd640\" (UID: \"3a5c3587-c990-4b45-b0e9-648ec99cd640\") " Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.183515 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a5c3587-c990-4b45-b0e9-648ec99cd640-config-volume\") pod \"3a5c3587-c990-4b45-b0e9-648ec99cd640\" (UID: \"3a5c3587-c990-4b45-b0e9-648ec99cd640\") " Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.183754 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntm6f\" (UniqueName: \"kubernetes.io/projected/3a5c3587-c990-4b45-b0e9-648ec99cd640-kube-api-access-ntm6f\") pod \"3a5c3587-c990-4b45-b0e9-648ec99cd640\" (UID: \"3a5c3587-c990-4b45-b0e9-648ec99cd640\") " Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.184343 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a5c3587-c990-4b45-b0e9-648ec99cd640-config-volume" (OuterVolumeSpecName: "config-volume") pod "3a5c3587-c990-4b45-b0e9-648ec99cd640" (UID: "3a5c3587-c990-4b45-b0e9-648ec99cd640"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.184765 5007 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a5c3587-c990-4b45-b0e9-648ec99cd640-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.189246 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5c3587-c990-4b45-b0e9-648ec99cd640-kube-api-access-ntm6f" (OuterVolumeSpecName: "kube-api-access-ntm6f") pod "3a5c3587-c990-4b45-b0e9-648ec99cd640" (UID: "3a5c3587-c990-4b45-b0e9-648ec99cd640"). InnerVolumeSpecName "kube-api-access-ntm6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.195695 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5c3587-c990-4b45-b0e9-648ec99cd640-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3a5c3587-c990-4b45-b0e9-648ec99cd640" (UID: "3a5c3587-c990-4b45-b0e9-648ec99cd640"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.286784 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntm6f\" (UniqueName: \"kubernetes.io/projected/3a5c3587-c990-4b45-b0e9-648ec99cd640-kube-api-access-ntm6f\") on node \"crc\" DevicePath \"\"" Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.286828 5007 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a5c3587-c990-4b45-b0e9-648ec99cd640-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.563150 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" event={"ID":"3a5c3587-c990-4b45-b0e9-648ec99cd640","Type":"ContainerDied","Data":"3267129b1c35e5e80736fdbd071ecb06b17c249e159a589439f5c953383b026d"} Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.563191 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3267129b1c35e5e80736fdbd071ecb06b17c249e159a589439f5c953383b026d" Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.563199 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-qqzqp" Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.629084 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b"] Feb 18 20:45:04 crc kubenswrapper[5007]: I0218 20:45:04.642800 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-s9l7b"] Feb 18 20:45:05 crc kubenswrapper[5007]: I0218 20:45:05.936057 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a092a1a2-0a15-4f13-93b0-ec89f14b9474" path="/var/lib/kubelet/pods/a092a1a2-0a15-4f13-93b0-ec89f14b9474/volumes" Feb 18 20:45:07 crc kubenswrapper[5007]: E0218 20:45:07.914489 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:45:08 crc kubenswrapper[5007]: E0218 20:45:08.911290 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:45:12 crc kubenswrapper[5007]: I0218 20:45:12.910795 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:45:12 crc kubenswrapper[5007]: E0218 20:45:12.911706 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:45:13 crc kubenswrapper[5007]: E0218 20:45:13.912598 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:45:16 crc kubenswrapper[5007]: E0218 20:45:16.913351 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:45:22 crc kubenswrapper[5007]: E0218 20:45:22.912421 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:45:22 crc kubenswrapper[5007]: I0218 20:45:22.912676 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:45:23 crc kubenswrapper[5007]: I0218 20:45:23.915113 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:45:23 crc kubenswrapper[5007]: E0218 20:45:23.915893 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:45:24 crc kubenswrapper[5007]: E0218 20:45:24.287192 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:45:24 crc kubenswrapper[5007]: E0218 20:45:24.287485 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:45:24 crc kubenswrapper[5007]: E0218 20:45:24.288767 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:45:28 crc 
kubenswrapper[5007]: E0218 20:45:28.914081 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:45:29 crc kubenswrapper[5007]: E0218 20:45:29.912406 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:45:36 crc kubenswrapper[5007]: I0218 20:45:36.910909 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:45:36 crc kubenswrapper[5007]: E0218 20:45:36.912404 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:45:36 crc kubenswrapper[5007]: E0218 20:45:36.913394 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:45:37 crc kubenswrapper[5007]: E0218 20:45:37.912524 5007 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:45:40 crc kubenswrapper[5007]: E0218 20:45:40.913336 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:45:42 crc kubenswrapper[5007]: E0218 20:45:42.499303 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:45:42 crc kubenswrapper[5007]: E0218 20:45:42.499641 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnmkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t5gwr_openshift-marketplace(d79bddda-80b1-41bc-92e9-1d41a7c7c84a): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:45:42 crc kubenswrapper[5007]: E0218 20:45:42.500982 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:45:48 crc kubenswrapper[5007]: I0218 20:45:48.910255 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:45:48 crc kubenswrapper[5007]: E0218 20:45:48.913746 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:45:50 crc kubenswrapper[5007]: I0218 20:45:50.170365 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"b0965f2ca0ae536fc62c24bbf8e9c9d02aea298561488e380005cf45059ee586"} Feb 18 20:45:52 crc kubenswrapper[5007]: E0218 20:45:52.912231 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:45:54 crc kubenswrapper[5007]: E0218 20:45:54.912153 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:45:55 crc kubenswrapper[5007]: I0218 20:45:55.969808 5007 scope.go:117] "RemoveContainer" containerID="4d29d61ad07dc8aa7f1655225a7f93cf7d6f91ed57e04c1a414db27394d28e8e" Feb 18 20:45:56 crc kubenswrapper[5007]: E0218 20:45:56.913225 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:46:00 crc kubenswrapper[5007]: E0218 20:46:00.914072 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:46:07 crc kubenswrapper[5007]: E0218 20:46:07.913801 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:46:08 crc kubenswrapper[5007]: E0218 20:46:08.911361 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:46:11 crc kubenswrapper[5007]: E0218 20:46:11.913070 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:46:15 crc kubenswrapper[5007]: E0218 20:46:15.927892 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:46:22 crc kubenswrapper[5007]: E0218 20:46:22.912534 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:46:22 crc kubenswrapper[5007]: E0218 20:46:22.914967 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:46:25 crc kubenswrapper[5007]: E0218 20:46:25.921748 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:46:26 crc kubenswrapper[5007]: E0218 20:46:26.912789 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:46:37 crc kubenswrapper[5007]: E0218 20:46:37.913856 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:46:37 crc kubenswrapper[5007]: E0218 20:46:37.914256 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:46:37 crc kubenswrapper[5007]: E0218 20:46:37.914356 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:46:38 crc kubenswrapper[5007]: E0218 
20:46:38.537759 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:46:38 crc kubenswrapper[5007]: E0218 20:46:38.537920 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-gjmxc_openshift-marketplace(9e85c638-b405-4e6c-baeb-5955fc34cb6e): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:46:38 crc kubenswrapper[5007]: E0218 20:46:38.539139 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:46:49 crc kubenswrapper[5007]: E0218 20:46:49.913209 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:46:50 crc kubenswrapper[5007]: E0218 20:46:50.912794 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:46:50 crc kubenswrapper[5007]: E0218 20:46:50.913273 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:46:52 crc kubenswrapper[5007]: E0218 
20:46:52.912311 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:47:02 crc kubenswrapper[5007]: E0218 20:47:02.912865 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:47:04 crc kubenswrapper[5007]: E0218 20:47:04.914214 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:47:05 crc kubenswrapper[5007]: E0218 20:47:05.926151 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:47:05 crc kubenswrapper[5007]: E0218 20:47:05.927850 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:47:15 crc kubenswrapper[5007]: E0218 20:47:15.925991 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:47:17 crc kubenswrapper[5007]: E0218 20:47:17.911963 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:47:20 crc kubenswrapper[5007]: E0218 20:47:20.913316 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:47:20 crc kubenswrapper[5007]: E0218 20:47:20.913342 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:47:28 crc kubenswrapper[5007]: E0218 20:47:28.911259 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:47:30 crc kubenswrapper[5007]: E0218 20:47:30.912675 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:47:34 crc kubenswrapper[5007]: E0218 20:47:34.913120 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:47:35 crc kubenswrapper[5007]: E0218 20:47:35.921333 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:47:43 crc kubenswrapper[5007]: E0218 20:47:43.913033 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:47:45 crc kubenswrapper[5007]: E0218 20:47:45.811858 5007 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:47:45 crc kubenswrapper[5007]: E0218 20:47:45.812425 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t742h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-bx7n9_openshift-marketplace(527ad024-e9b5-4aaf-96cf-d8f86ac2e70d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:47:45 crc kubenswrapper[5007]: E0218 20:47:45.814140 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:47:46 crc kubenswrapper[5007]: E0218 20:47:46.912875 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:47:48 crc kubenswrapper[5007]: E0218 20:47:48.914922 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:47:54 crc kubenswrapper[5007]: E0218 20:47:54.913258 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" 
podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:47:55 crc kubenswrapper[5007]: E0218 20:47:55.925689 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:47:58 crc kubenswrapper[5007]: E0218 20:47:58.922267 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:48:01 crc kubenswrapper[5007]: E0218 20:48:01.914122 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:48:05 crc kubenswrapper[5007]: E0218 20:48:05.926866 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:48:06 crc kubenswrapper[5007]: E0218 20:48:06.911565 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:48:11 crc kubenswrapper[5007]: E0218 20:48:11.913303 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:48:12 crc kubenswrapper[5007]: E0218 20:48:12.911710 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:48:15 crc kubenswrapper[5007]: I0218 20:48:15.664636 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:48:15 crc kubenswrapper[5007]: I0218 20:48:15.665184 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:48:16 crc kubenswrapper[5007]: E0218 20:48:16.911943 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:48:17 crc kubenswrapper[5007]: E0218 20:48:17.912410 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:48:23 crc kubenswrapper[5007]: E0218 20:48:23.914582 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:48:25 crc kubenswrapper[5007]: E0218 20:48:25.927711 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:48:28 crc kubenswrapper[5007]: E0218 20:48:28.913075 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:48:31 crc kubenswrapper[5007]: E0218 20:48:31.912262 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:48:34 crc kubenswrapper[5007]: E0218 20:48:34.911562 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:48:40 crc kubenswrapper[5007]: E0218 20:48:40.913510 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:48:41 crc kubenswrapper[5007]: E0218 20:48:41.911656 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:48:44 crc kubenswrapper[5007]: E0218 20:48:44.911884 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" 
podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:48:45 crc kubenswrapper[5007]: I0218 20:48:45.663596 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:48:45 crc kubenswrapper[5007]: I0218 20:48:45.663650 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:48:45 crc kubenswrapper[5007]: E0218 20:48:45.918544 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:48:52 crc kubenswrapper[5007]: E0218 20:48:52.912579 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:48:56 crc kubenswrapper[5007]: E0218 20:48:56.913267 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:48:57 crc kubenswrapper[5007]: E0218 20:48:57.911595 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:48:59 crc kubenswrapper[5007]: E0218 20:48:59.911782 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:49:03 crc kubenswrapper[5007]: E0218 20:49:03.916364 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:49:10 crc kubenswrapper[5007]: E0218 20:49:10.912901 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:49:10 crc kubenswrapper[5007]: E0218 20:49:10.912972 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:49:14 crc kubenswrapper[5007]: E0218 20:49:14.767135 5007 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.132:48804->38.102.83.132:41997: write tcp 38.102.83.132:48804->38.102.83.132:41997: write: broken pipe Feb 18 20:49:14 crc kubenswrapper[5007]: E0218 20:49:14.912886 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:49:15 crc kubenswrapper[5007]: I0218 20:49:15.664133 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:49:15 crc kubenswrapper[5007]: I0218 20:49:15.664236 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:49:15 crc kubenswrapper[5007]: I0218 20:49:15.664281 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 20:49:15 crc kubenswrapper[5007]: I0218 20:49:15.665781 5007 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0965f2ca0ae536fc62c24bbf8e9c9d02aea298561488e380005cf45059ee586"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:49:15 crc kubenswrapper[5007]: I0218 20:49:15.665853 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://b0965f2ca0ae536fc62c24bbf8e9c9d02aea298561488e380005cf45059ee586" gracePeriod=600 Feb 18 20:49:16 crc kubenswrapper[5007]: I0218 20:49:16.020521 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="b0965f2ca0ae536fc62c24bbf8e9c9d02aea298561488e380005cf45059ee586" exitCode=0 Feb 18 20:49:16 crc kubenswrapper[5007]: I0218 20:49:16.020940 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"b0965f2ca0ae536fc62c24bbf8e9c9d02aea298561488e380005cf45059ee586"} Feb 18 20:49:16 crc kubenswrapper[5007]: I0218 20:49:16.021238 5007 scope.go:117] "RemoveContainer" containerID="3696f39b9915fd565b0af26580da458b5036d284c47f006a530c124d6d6b504e" Feb 18 20:49:17 crc kubenswrapper[5007]: I0218 20:49:17.034610 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d"} Feb 18 20:49:18 crc kubenswrapper[5007]: E0218 20:49:18.912558 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:49:23 crc kubenswrapper[5007]: E0218 20:49:23.914102 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:49:25 crc kubenswrapper[5007]: E0218 20:49:25.932704 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:49:27 crc kubenswrapper[5007]: E0218 20:49:27.914924 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:49:29 crc kubenswrapper[5007]: E0218 20:49:29.911814 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:49:36 crc 
kubenswrapper[5007]: E0218 20:49:36.913510 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:49:38 crc kubenswrapper[5007]: E0218 20:49:38.912133 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:49:41 crc kubenswrapper[5007]: E0218 20:49:41.912869 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:49:42 crc kubenswrapper[5007]: E0218 20:49:42.912011 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:49:50 crc kubenswrapper[5007]: E0218 20:49:50.914514 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:49:52 crc kubenswrapper[5007]: E0218 20:49:52.911641 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:49:53 crc kubenswrapper[5007]: E0218 20:49:53.912210 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:49:56 crc kubenswrapper[5007]: E0218 20:49:56.912845 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:50:05 crc kubenswrapper[5007]: E0218 20:50:05.918847 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:50:07 crc kubenswrapper[5007]: E0218 20:50:07.911691 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:50:08 crc kubenswrapper[5007]: E0218 20:50:08.912804 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:50:10 crc kubenswrapper[5007]: E0218 20:50:10.912508 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:50:20 crc kubenswrapper[5007]: E0218 20:50:20.913107 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:50:21 crc kubenswrapper[5007]: E0218 20:50:21.911216 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" 
podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:50:22 crc kubenswrapper[5007]: E0218 20:50:22.911257 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:50:23 crc kubenswrapper[5007]: E0218 20:50:23.912628 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:50:31 crc kubenswrapper[5007]: I0218 20:50:31.913402 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:50:32 crc kubenswrapper[5007]: E0218 20:50:32.627318 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:50:32 crc kubenswrapper[5007]: E0218 20:50:32.627837 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:50:32 crc kubenswrapper[5007]: E0218 20:50:32.628995 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:50:34 crc kubenswrapper[5007]: E0218 20:50:34.912590 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:50:35 crc kubenswrapper[5007]: E0218 20:50:35.929017 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:50:37 crc kubenswrapper[5007]: E0218 20:50:37.912077 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:50:43 crc kubenswrapper[5007]: E0218 20:50:43.914360 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:50:47 crc kubenswrapper[5007]: E0218 20:50:47.912535 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:50:48 crc kubenswrapper[5007]: E0218 20:50:48.913404 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:50:51 crc kubenswrapper[5007]: E0218 20:50:51.073538 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:50:51 crc kubenswrapper[5007]: E0218 20:50:51.074244 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnmkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t5gwr_openshift-marketplace(d79bddda-80b1-41bc-92e9-1d41a7c7c84a): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:50:51 crc kubenswrapper[5007]: E0218 20:50:51.075821 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:50:57 crc kubenswrapper[5007]: E0218 20:50:57.913962 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:50:59 crc kubenswrapper[5007]: E0218 20:50:59.911681 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:51:00 crc kubenswrapper[5007]: E0218 20:51:00.912177 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:51:05 crc kubenswrapper[5007]: E0218 20:51:05.918578 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:51:12 crc kubenswrapper[5007]: E0218 20:51:12.914689 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:51:14 crc kubenswrapper[5007]: E0218 20:51:14.913652 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:51:15 crc kubenswrapper[5007]: E0218 20:51:15.925798 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:51:18 crc kubenswrapper[5007]: E0218 20:51:18.913611 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:51:24 crc kubenswrapper[5007]: E0218 20:51:24.914533 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:51:27 crc kubenswrapper[5007]: E0218 20:51:27.914004 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" 
podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:51:29 crc kubenswrapper[5007]: E0218 20:51:29.912568 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:51:30 crc kubenswrapper[5007]: E0218 20:51:30.911801 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:51:38 crc kubenswrapper[5007]: E0218 20:51:38.915137 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:51:41 crc kubenswrapper[5007]: E0218 20:51:41.913081 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:51:44 crc kubenswrapper[5007]: E0218 20:51:44.005802 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: 
parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:51:44 crc kubenswrapper[5007]: E0218 20:51:44.006587 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gjmxc_openshift-marketplace(9e85c638-b405-4e6c-baeb-5955fc34cb6e): ErrImagePull: copying system image from manifest list: parsing 
image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:51:44 crc kubenswrapper[5007]: E0218 20:51:44.007988 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:51:44 crc kubenswrapper[5007]: E0218 20:51:44.912716 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:51:45 crc kubenswrapper[5007]: I0218 20:51:45.664021 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:51:45 crc kubenswrapper[5007]: I0218 20:51:45.664323 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:51:50 crc kubenswrapper[5007]: E0218 20:51:50.912159 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:51:55 crc kubenswrapper[5007]: E0218 20:51:55.919097 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:51:55 crc kubenswrapper[5007]: E0218 20:51:55.919163 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:51:56 crc kubenswrapper[5007]: E0218 20:51:56.913013 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:52:01 crc kubenswrapper[5007]: E0218 20:52:01.914408 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:52:06 crc kubenswrapper[5007]: E0218 20:52:06.914253 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:52:07 crc kubenswrapper[5007]: E0218 20:52:07.911197 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:52:09 crc kubenswrapper[5007]: E0218 20:52:09.917115 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:52:14 crc kubenswrapper[5007]: E0218 20:52:14.913754 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:52:15 crc kubenswrapper[5007]: I0218 20:52:15.664561 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:52:15 crc 
kubenswrapper[5007]: I0218 20:52:15.665005 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:52:17 crc kubenswrapper[5007]: E0218 20:52:17.912719 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:52:21 crc kubenswrapper[5007]: E0218 20:52:21.913301 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:52:21 crc kubenswrapper[5007]: E0218 20:52:21.913785 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:52:25 crc kubenswrapper[5007]: E0218 20:52:25.921123 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:52:29 crc kubenswrapper[5007]: E0218 20:52:29.913116 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:52:32 crc kubenswrapper[5007]: E0218 20:52:32.912707 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:52:36 crc kubenswrapper[5007]: E0218 20:52:36.913157 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:52:40 crc kubenswrapper[5007]: E0218 20:52:40.912950 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:52:40 crc kubenswrapper[5007]: E0218 20:52:40.912966 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:52:45 crc kubenswrapper[5007]: I0218 20:52:45.663668 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:52:45 crc kubenswrapper[5007]: I0218 20:52:45.664556 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:52:45 crc kubenswrapper[5007]: I0218 20:52:45.664635 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 20:52:45 crc kubenswrapper[5007]: I0218 20:52:45.665933 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:52:45 crc kubenswrapper[5007]: I0218 20:52:45.666022 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" 
gracePeriod=600 Feb 18 20:52:45 crc kubenswrapper[5007]: E0218 20:52:45.800621 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:52:46 crc kubenswrapper[5007]: I0218 20:52:46.476040 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" exitCode=0 Feb 18 20:52:46 crc kubenswrapper[5007]: I0218 20:52:46.476112 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d"} Feb 18 20:52:46 crc kubenswrapper[5007]: I0218 20:52:46.476172 5007 scope.go:117] "RemoveContainer" containerID="b0965f2ca0ae536fc62c24bbf8e9c9d02aea298561488e380005cf45059ee586" Feb 18 20:52:46 crc kubenswrapper[5007]: I0218 20:52:46.477888 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:52:46 crc kubenswrapper[5007]: E0218 20:52:46.478617 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:52:47 crc kubenswrapper[5007]: E0218 
20:52:47.912101 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:52:51 crc kubenswrapper[5007]: E0218 20:52:51.118231 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:52:51 crc kubenswrapper[5007]: E0218 20:52:51.118782 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t742h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bx7n9_openshift-marketplace(527ad024-e9b5-4aaf-96cf-d8f86ac2e70d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:52:51 crc kubenswrapper[5007]: E0218 20:52:51.119969 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" 
podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:52:52 crc kubenswrapper[5007]: E0218 20:52:52.912578 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:52:53 crc kubenswrapper[5007]: E0218 20:52:53.914083 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:52:57 crc kubenswrapper[5007]: I0218 20:52:57.910924 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:52:57 crc kubenswrapper[5007]: E0218 20:52:57.912040 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:53:00 crc kubenswrapper[5007]: E0218 20:53:00.915348 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" 
podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:53:03 crc kubenswrapper[5007]: E0218 20:53:03.912494 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:53:06 crc kubenswrapper[5007]: E0218 20:53:06.912241 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:53:07 crc kubenswrapper[5007]: E0218 20:53:07.911488 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:53:09 crc kubenswrapper[5007]: I0218 20:53:09.910395 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:53:09 crc kubenswrapper[5007]: E0218 20:53:09.911174 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:53:14 crc kubenswrapper[5007]: E0218 
20:53:14.912137 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:53:16 crc kubenswrapper[5007]: E0218 20:53:16.913371 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:53:20 crc kubenswrapper[5007]: E0218 20:53:20.913888 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:53:21 crc kubenswrapper[5007]: E0218 20:53:21.911881 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:53:24 crc kubenswrapper[5007]: I0218 20:53:24.910063 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:53:24 crc kubenswrapper[5007]: E0218 20:53:24.910864 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:53:26 crc kubenswrapper[5007]: E0218 20:53:26.912902 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:53:30 crc kubenswrapper[5007]: E0218 20:53:30.913939 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:53:34 crc kubenswrapper[5007]: E0218 20:53:34.912862 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:53:35 crc kubenswrapper[5007]: I0218 20:53:35.953353 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:53:35 crc kubenswrapper[5007]: E0218 20:53:35.954025 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:53:35 crc kubenswrapper[5007]: E0218 20:53:35.956899 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:53:40 crc kubenswrapper[5007]: E0218 20:53:40.911955 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:53:45 crc kubenswrapper[5007]: E0218 20:53:45.929143 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:53:49 crc kubenswrapper[5007]: E0218 20:53:49.917144 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:53:49 crc kubenswrapper[5007]: E0218 20:53:49.917165 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:53:50 crc kubenswrapper[5007]: I0218 20:53:50.910305 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:53:50 crc kubenswrapper[5007]: E0218 20:53:50.910992 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:53:52 crc kubenswrapper[5007]: E0218 20:53:52.911285 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:53:58 crc kubenswrapper[5007]: E0218 20:53:58.912457 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:54:02 crc kubenswrapper[5007]: E0218 20:54:02.912907 5007 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:54:03 crc kubenswrapper[5007]: E0218 20:54:03.912730 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:54:04 crc kubenswrapper[5007]: I0218 20:54:04.909766 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:54:04 crc kubenswrapper[5007]: E0218 20:54:04.910259 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:54:04 crc kubenswrapper[5007]: E0218 20:54:04.913407 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:54:09 crc kubenswrapper[5007]: E0218 20:54:09.912617 5007 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:54:14 crc kubenswrapper[5007]: E0218 20:54:14.913403 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:54:15 crc kubenswrapper[5007]: E0218 20:54:15.912782 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:54:16 crc kubenswrapper[5007]: E0218 20:54:16.914835 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:54:18 crc kubenswrapper[5007]: I0218 20:54:18.909569 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:54:18 crc kubenswrapper[5007]: E0218 20:54:18.910590 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:54:20 crc kubenswrapper[5007]: E0218 20:54:20.913263 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:54:27 crc kubenswrapper[5007]: E0218 20:54:27.911435 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:54:27 crc kubenswrapper[5007]: E0218 20:54:27.911589 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:54:29 crc kubenswrapper[5007]: I0218 20:54:29.909487 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:54:29 crc kubenswrapper[5007]: E0218 20:54:29.911706 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:54:29 crc kubenswrapper[5007]: E0218 20:54:29.911818 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:54:31 crc kubenswrapper[5007]: E0218 20:54:31.910543 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:54:39 crc kubenswrapper[5007]: E0218 20:54:39.912785 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:54:41 crc kubenswrapper[5007]: E0218 20:54:41.914413 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:54:42 crc kubenswrapper[5007]: 
I0218 20:54:42.910120 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:54:42 crc kubenswrapper[5007]: E0218 20:54:42.910797 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:54:42 crc kubenswrapper[5007]: E0218 20:54:42.911561 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:54:43 crc kubenswrapper[5007]: E0218 20:54:43.912062 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:54:51 crc kubenswrapper[5007]: E0218 20:54:51.912376 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:54:52 crc kubenswrapper[5007]: E0218 20:54:52.912883 
5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:54:53 crc kubenswrapper[5007]: I0218 20:54:53.910162 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:54:53 crc kubenswrapper[5007]: E0218 20:54:53.910444 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:54:55 crc kubenswrapper[5007]: E0218 20:54:55.930028 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:54:57 crc kubenswrapper[5007]: E0218 20:54:57.911733 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:55:02 crc kubenswrapper[5007]: E0218 20:55:02.911920 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:55:05 crc kubenswrapper[5007]: I0218 20:55:05.924282 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:55:05 crc kubenswrapper[5007]: E0218 20:55:05.925199 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:55:06 crc kubenswrapper[5007]: E0218 20:55:06.912955 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:55:07 crc kubenswrapper[5007]: E0218 20:55:07.912160 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:55:11 crc kubenswrapper[5007]: E0218 20:55:11.913827 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:55:15 crc kubenswrapper[5007]: E0218 20:55:15.921640 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:55:18 crc kubenswrapper[5007]: E0218 20:55:18.911643 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:55:19 crc kubenswrapper[5007]: E0218 20:55:19.913077 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:55:20 crc kubenswrapper[5007]: I0218 20:55:20.912151 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:55:20 crc kubenswrapper[5007]: E0218 20:55:20.913075 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:55:23 crc kubenswrapper[5007]: E0218 20:55:23.913526 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:55:26 crc kubenswrapper[5007]: E0218 20:55:26.912571 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:55:31 crc kubenswrapper[5007]: E0218 20:55:31.913143 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:55:32 crc kubenswrapper[5007]: I0218 20:55:32.909994 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:55:32 crc kubenswrapper[5007]: E0218 20:55:32.910799 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:55:33 crc kubenswrapper[5007]: E0218 20:55:33.911959 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:55:34 crc kubenswrapper[5007]: I0218 20:55:34.912015 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:55:35 crc kubenswrapper[5007]: E0218 20:55:35.452128 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:55:35 crc kubenswrapper[5007]: E0218 20:55:35.452610 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:55:35 crc kubenswrapper[5007]: E0218 20:55:35.453748 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:55:39 crc kubenswrapper[5007]: E0218 20:55:39.915653 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:55:42 crc kubenswrapper[5007]: E0218 20:55:42.912335 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:55:44 crc kubenswrapper[5007]: I0218 20:55:44.911384 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:55:44 crc kubenswrapper[5007]: E0218 20:55:44.912494 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:55:48 crc 
kubenswrapper[5007]: E0218 20:55:48.913470 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:55:49 crc kubenswrapper[5007]: E0218 20:55:49.913899 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:55:55 crc kubenswrapper[5007]: E0218 20:55:55.755556 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:55:55 crc kubenswrapper[5007]: E0218 20:55:55.756500 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnmkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t5gwr_openshift-marketplace(d79bddda-80b1-41bc-92e9-1d41a7c7c84a): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:55:55 crc kubenswrapper[5007]: E0218 20:55:55.757747 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:55:55 crc kubenswrapper[5007]: E0218 20:55:55.926990 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:55:57 crc kubenswrapper[5007]: I0218 20:55:57.910845 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:55:57 crc kubenswrapper[5007]: E0218 20:55:57.913477 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:55:59 crc kubenswrapper[5007]: E0218 20:55:59.912756 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:56:02 crc kubenswrapper[5007]: E0218 20:56:02.913036 5007 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:56:03 crc kubenswrapper[5007]: E0218 20:56:03.147218 5007 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.132:59042->38.102.83.132:41997: write tcp 38.102.83.132:59042->38.102.83.132:41997: write: broken pipe Feb 18 20:56:07 crc kubenswrapper[5007]: E0218 20:56:07.913897 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:56:08 crc kubenswrapper[5007]: E0218 20:56:08.914370 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:56:10 crc kubenswrapper[5007]: I0218 20:56:10.909514 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:56:10 crc kubenswrapper[5007]: E0218 20:56:10.911849 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:56:11 crc kubenswrapper[5007]: E0218 20:56:11.912710 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:56:13 crc kubenswrapper[5007]: E0218 20:56:13.911918 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:56:19 crc kubenswrapper[5007]: E0218 20:56:19.912109 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:56:22 crc kubenswrapper[5007]: E0218 20:56:22.911213 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:56:23 crc kubenswrapper[5007]: I0218 20:56:23.910626 5007 scope.go:117] 
"RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:56:23 crc kubenswrapper[5007]: E0218 20:56:23.911287 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:56:25 crc kubenswrapper[5007]: E0218 20:56:25.918669 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:56:27 crc kubenswrapper[5007]: E0218 20:56:27.911900 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:56:34 crc kubenswrapper[5007]: E0218 20:56:34.914095 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:56:37 crc kubenswrapper[5007]: E0218 20:56:37.912228 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:56:38 crc kubenswrapper[5007]: I0218 20:56:38.911861 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:56:38 crc kubenswrapper[5007]: E0218 20:56:38.912951 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:56:39 crc kubenswrapper[5007]: E0218 20:56:39.913723 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:56:40 crc kubenswrapper[5007]: E0218 20:56:40.911487 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:56:47 crc kubenswrapper[5007]: E0218 20:56:47.912025 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:56:49 crc kubenswrapper[5007]: E0218 20:56:49.914176 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:56:50 crc kubenswrapper[5007]: E0218 20:56:50.912664 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:56:53 crc kubenswrapper[5007]: I0218 20:56:53.910161 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:56:53 crc kubenswrapper[5007]: E0218 20:56:53.911210 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:56:55 crc kubenswrapper[5007]: E0218 20:56:55.533103 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing 
image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:56:55 crc kubenswrapper[5007]: E0218 20:56:55.534094 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gjmxc_openshift-marketplace(9e85c638-b405-4e6c-baeb-5955fc34cb6e): ErrImagePull: copying system image from manifest list: parsing image 
configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:56:55 crc kubenswrapper[5007]: E0218 20:56:55.535324 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:57:02 crc kubenswrapper[5007]: E0218 20:57:02.913142 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:57:03 crc kubenswrapper[5007]: E0218 20:57:03.913653 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:57:04 crc kubenswrapper[5007]: E0218 20:57:04.912785 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:57:07 crc kubenswrapper[5007]: E0218 20:57:07.914787 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:57:08 crc kubenswrapper[5007]: I0218 20:57:08.910306 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:57:08 crc kubenswrapper[5007]: E0218 20:57:08.911131 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:57:14 crc kubenswrapper[5007]: E0218 20:57:14.911855 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:57:16 crc kubenswrapper[5007]: E0218 20:57:16.912236 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:57:17 crc kubenswrapper[5007]: E0218 20:57:17.911413 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:57:19 crc kubenswrapper[5007]: I0218 20:57:19.910590 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:57:19 crc kubenswrapper[5007]: E0218 20:57:19.911253 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:57:22 crc kubenswrapper[5007]: E0218 20:57:22.913098 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:57:28 crc kubenswrapper[5007]: E0218 20:57:28.914742 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:57:28 crc kubenswrapper[5007]: E0218 20:57:28.914740 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:57:30 crc kubenswrapper[5007]: E0218 20:57:30.913935 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:57:31 crc kubenswrapper[5007]: I0218 20:57:31.910947 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:57:31 crc kubenswrapper[5007]: E0218 20:57:31.911482 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:57:33 crc kubenswrapper[5007]: E0218 20:57:33.912193 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:57:41 crc kubenswrapper[5007]: E0218 20:57:41.912462 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:57:42 crc kubenswrapper[5007]: I0218 20:57:42.909823 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:57:42 crc kubenswrapper[5007]: E0218 20:57:42.910395 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 20:57:43 crc kubenswrapper[5007]: E0218 20:57:43.912835 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:57:44 crc kubenswrapper[5007]: E0218 20:57:44.912065 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:57:44 crc kubenswrapper[5007]: E0218 20:57:44.912155 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:57:54 crc kubenswrapper[5007]: I0218 20:57:54.911469 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 20:57:55 crc kubenswrapper[5007]: I0218 20:57:55.874152 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"c76dafda13d3d8e91643709e4490f68507411ab8c12caecfa60bdf55c7bb3c43"} Feb 18 20:57:55 crc kubenswrapper[5007]: E0218 20:57:55.911692 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:57:55 crc kubenswrapper[5007]: E0218 20:57:55.930782 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:57:57 crc kubenswrapper[5007]: E0218 20:57:57.912206 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:58:06 crc 
kubenswrapper[5007]: E0218 20:58:06.915022 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:58:07 crc kubenswrapper[5007]: E0218 20:58:07.945350 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:58:07 crc kubenswrapper[5007]: E0218 20:58:07.945807 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t742h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bx7n9_openshift-marketplace(527ad024-e9b5-4aaf-96cf-d8f86ac2e70d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:58:07 crc kubenswrapper[5007]: E0218 20:58:07.947052 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" 
podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:58:09 crc kubenswrapper[5007]: E0218 20:58:09.923194 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:58:09 crc kubenswrapper[5007]: E0218 20:58:09.923293 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:58:18 crc kubenswrapper[5007]: E0218 20:58:18.912042 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:58:18 crc kubenswrapper[5007]: E0218 20:58:18.913356 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:58:20 crc kubenswrapper[5007]: E0218 20:58:20.912199 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:58:23 crc kubenswrapper[5007]: E0218 20:58:23.911646 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:58:30 crc kubenswrapper[5007]: E0218 20:58:30.911842 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:58:32 crc kubenswrapper[5007]: E0218 20:58:32.911692 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:58:33 crc kubenswrapper[5007]: E0218 20:58:33.949613 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:58:34 crc kubenswrapper[5007]: E0218 20:58:34.911719 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:58:43 crc kubenswrapper[5007]: E0218 20:58:43.915713 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:58:45 crc kubenswrapper[5007]: E0218 20:58:45.927111 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:58:46 crc kubenswrapper[5007]: E0218 20:58:46.911385 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:58:47 crc kubenswrapper[5007]: E0218 20:58:47.910992 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" 
podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:58:54 crc kubenswrapper[5007]: E0218 20:58:54.914516 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:58:57 crc kubenswrapper[5007]: E0218 20:58:57.912479 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:59:00 crc kubenswrapper[5007]: E0218 20:59:00.912373 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:59:00 crc kubenswrapper[5007]: E0218 20:59:00.912902 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:59:07 crc kubenswrapper[5007]: E0218 20:59:07.911819 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:59:11 crc kubenswrapper[5007]: E0218 20:59:11.915867 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:59:12 crc kubenswrapper[5007]: E0218 20:59:12.912793 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:59:13 crc kubenswrapper[5007]: E0218 20:59:13.914220 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:59:18 crc kubenswrapper[5007]: E0218 20:59:18.912500 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:59:23 crc kubenswrapper[5007]: E0218 20:59:23.913801 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:59:24 crc kubenswrapper[5007]: E0218 20:59:24.913137 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:59:27 crc kubenswrapper[5007]: E0218 20:59:27.914385 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:59:32 crc kubenswrapper[5007]: E0218 20:59:32.912624 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:59:37 crc kubenswrapper[5007]: E0218 20:59:37.912589 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" 
podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:59:38 crc kubenswrapper[5007]: E0218 20:59:38.913132 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:59:40 crc kubenswrapper[5007]: E0218 20:59:40.912825 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:59:47 crc kubenswrapper[5007]: E0218 20:59:47.911890 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 20:59:48 crc kubenswrapper[5007]: E0218 20:59:48.914204 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 20:59:50 crc kubenswrapper[5007]: E0218 20:59:50.913737 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 20:59:53 crc kubenswrapper[5007]: E0218 20:59:53.913578 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 20:59:59 crc kubenswrapper[5007]: E0218 20:59:59.913060 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.164862 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n"] Feb 18 21:00:00 crc kubenswrapper[5007]: E0218 21:00:00.165385 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c3587-c990-4b45-b0e9-648ec99cd640" containerName="collect-profiles" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.165405 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c3587-c990-4b45-b0e9-648ec99cd640" containerName="collect-profiles" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.165666 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5c3587-c990-4b45-b0e9-648ec99cd640" containerName="collect-profiles" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.166672 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.169320 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.169841 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.213800 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n"] Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.321524 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38608ac3-8740-4270-a610-420a830284ef-config-volume\") pod \"collect-profiles-29524140-tl75n\" (UID: \"38608ac3-8740-4270-a610-420a830284ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.321615 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk5d6\" (UniqueName: \"kubernetes.io/projected/38608ac3-8740-4270-a610-420a830284ef-kube-api-access-tk5d6\") pod \"collect-profiles-29524140-tl75n\" (UID: \"38608ac3-8740-4270-a610-420a830284ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.321990 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38608ac3-8740-4270-a610-420a830284ef-secret-volume\") pod \"collect-profiles-29524140-tl75n\" (UID: \"38608ac3-8740-4270-a610-420a830284ef\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.425253 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38608ac3-8740-4270-a610-420a830284ef-config-volume\") pod \"collect-profiles-29524140-tl75n\" (UID: \"38608ac3-8740-4270-a610-420a830284ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.425361 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk5d6\" (UniqueName: \"kubernetes.io/projected/38608ac3-8740-4270-a610-420a830284ef-kube-api-access-tk5d6\") pod \"collect-profiles-29524140-tl75n\" (UID: \"38608ac3-8740-4270-a610-420a830284ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.425534 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38608ac3-8740-4270-a610-420a830284ef-secret-volume\") pod \"collect-profiles-29524140-tl75n\" (UID: \"38608ac3-8740-4270-a610-420a830284ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.426595 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38608ac3-8740-4270-a610-420a830284ef-config-volume\") pod \"collect-profiles-29524140-tl75n\" (UID: \"38608ac3-8740-4270-a610-420a830284ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.437609 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/38608ac3-8740-4270-a610-420a830284ef-secret-volume\") pod \"collect-profiles-29524140-tl75n\" (UID: \"38608ac3-8740-4270-a610-420a830284ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.463797 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk5d6\" (UniqueName: \"kubernetes.io/projected/38608ac3-8740-4270-a610-420a830284ef-kube-api-access-tk5d6\") pod \"collect-profiles-29524140-tl75n\" (UID: \"38608ac3-8740-4270-a610-420a830284ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:00 crc kubenswrapper[5007]: I0218 21:00:00.548109 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:01 crc kubenswrapper[5007]: I0218 21:00:01.048402 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n"] Feb 18 21:00:01 crc kubenswrapper[5007]: W0218 21:00:01.058506 5007 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38608ac3_8740_4270_a610_420a830284ef.slice/crio-1bbc938e642aedb5693617d13114ac0262c53e5e7daf5a77cdf4c6bb1af56853 WatchSource:0}: Error finding container 1bbc938e642aedb5693617d13114ac0262c53e5e7daf5a77cdf4c6bb1af56853: Status 404 returned error can't find the container with id 1bbc938e642aedb5693617d13114ac0262c53e5e7daf5a77cdf4c6bb1af56853 Feb 18 21:00:01 crc kubenswrapper[5007]: I0218 21:00:01.516949 5007 generic.go:334] "Generic (PLEG): container finished" podID="38608ac3-8740-4270-a610-420a830284ef" containerID="10564f4219f08a665d28641b5a94f405443b8440dffd2145084098124b3ee5e0" exitCode=0 Feb 18 21:00:01 crc kubenswrapper[5007]: I0218 21:00:01.517017 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" event={"ID":"38608ac3-8740-4270-a610-420a830284ef","Type":"ContainerDied","Data":"10564f4219f08a665d28641b5a94f405443b8440dffd2145084098124b3ee5e0"} Feb 18 21:00:01 crc kubenswrapper[5007]: I0218 21:00:01.517390 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" event={"ID":"38608ac3-8740-4270-a610-420a830284ef","Type":"ContainerStarted","Data":"1bbc938e642aedb5693617d13114ac0262c53e5e7daf5a77cdf4c6bb1af56853"} Feb 18 21:00:02 crc kubenswrapper[5007]: E0218 21:00:02.956629 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.299940 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.449536 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38608ac3-8740-4270-a610-420a830284ef-config-volume\") pod \"38608ac3-8740-4270-a610-420a830284ef\" (UID: \"38608ac3-8740-4270-a610-420a830284ef\") " Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.449701 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk5d6\" (UniqueName: \"kubernetes.io/projected/38608ac3-8740-4270-a610-420a830284ef-kube-api-access-tk5d6\") pod \"38608ac3-8740-4270-a610-420a830284ef\" (UID: \"38608ac3-8740-4270-a610-420a830284ef\") " Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.449752 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38608ac3-8740-4270-a610-420a830284ef-secret-volume\") pod \"38608ac3-8740-4270-a610-420a830284ef\" (UID: \"38608ac3-8740-4270-a610-420a830284ef\") " Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.450699 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38608ac3-8740-4270-a610-420a830284ef-config-volume" (OuterVolumeSpecName: "config-volume") pod "38608ac3-8740-4270-a610-420a830284ef" (UID: "38608ac3-8740-4270-a610-420a830284ef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.455768 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38608ac3-8740-4270-a610-420a830284ef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "38608ac3-8740-4270-a610-420a830284ef" (UID: "38608ac3-8740-4270-a610-420a830284ef"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.457716 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38608ac3-8740-4270-a610-420a830284ef-kube-api-access-tk5d6" (OuterVolumeSpecName: "kube-api-access-tk5d6") pod "38608ac3-8740-4270-a610-420a830284ef" (UID: "38608ac3-8740-4270-a610-420a830284ef"). InnerVolumeSpecName "kube-api-access-tk5d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.535533 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" event={"ID":"38608ac3-8740-4270-a610-420a830284ef","Type":"ContainerDied","Data":"1bbc938e642aedb5693617d13114ac0262c53e5e7daf5a77cdf4c6bb1af56853"} Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.535590 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bbc938e642aedb5693617d13114ac0262c53e5e7daf5a77cdf4c6bb1af56853" Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.535667 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-tl75n" Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.552367 5007 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38608ac3-8740-4270-a610-420a830284ef-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.552400 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk5d6\" (UniqueName: \"kubernetes.io/projected/38608ac3-8740-4270-a610-420a830284ef-kube-api-access-tk5d6\") on node \"crc\" DevicePath \"\"" Feb 18 21:00:03 crc kubenswrapper[5007]: I0218 21:00:03.552410 5007 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/38608ac3-8740-4270-a610-420a830284ef-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 21:00:03 crc kubenswrapper[5007]: E0218 21:00:03.913379 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:00:04 crc kubenswrapper[5007]: I0218 21:00:04.384958 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9"] Feb 18 21:00:04 crc kubenswrapper[5007]: I0218 21:00:04.403889 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-fj9h9"] Feb 18 21:00:05 crc kubenswrapper[5007]: E0218 21:00:05.929590 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:00:05 crc kubenswrapper[5007]: I0218 21:00:05.934158 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66497243-bc9b-44fe-9ada-57a39897b3e8" path="/var/lib/kubelet/pods/66497243-bc9b-44fe-9ada-57a39897b3e8/volumes" Feb 18 21:00:13 crc kubenswrapper[5007]: E0218 21:00:13.911812 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:00:14 crc kubenswrapper[5007]: E0218 21:00:14.912198 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:00:14 crc kubenswrapper[5007]: E0218 21:00:14.914084 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:00:15 crc kubenswrapper[5007]: I0218 21:00:15.664308 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 18 21:00:15 crc kubenswrapper[5007]: I0218 21:00:15.664368 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:00:17 crc kubenswrapper[5007]: E0218 21:00:17.912256 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:00:25 crc kubenswrapper[5007]: E0218 21:00:25.918674 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:00:26 crc kubenswrapper[5007]: E0218 21:00:26.912762 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:00:27 crc kubenswrapper[5007]: E0218 21:00:27.912077 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:00:29 crc kubenswrapper[5007]: E0218 21:00:29.912701 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:00:37 crc kubenswrapper[5007]: E0218 21:00:37.913524 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:00:39 crc kubenswrapper[5007]: I0218 21:00:39.912061 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 21:00:40 crc kubenswrapper[5007]: E0218 21:00:40.911677 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:00:41 crc kubenswrapper[5007]: E0218 21:00:41.912407 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:00:45 crc kubenswrapper[5007]: I0218 21:00:45.664038 5007 patch_prober.go:28] interesting 
pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:00:45 crc kubenswrapper[5007]: I0218 21:00:45.664405 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:00:51 crc kubenswrapper[5007]: E0218 21:00:51.913353 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:00:52 crc kubenswrapper[5007]: E0218 21:00:52.646325 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 21:00:52 crc kubenswrapper[5007]: E0218 21:00:52.646627 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 21:00:52 crc kubenswrapper[5007]: E0218 21:00:52.647843 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:00:53 crc kubenswrapper[5007]: E0218 21:00:53.912944 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:00:54 crc kubenswrapper[5007]: E0218 21:00:54.912186 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:00:56 crc kubenswrapper[5007]: I0218 21:00:56.338552 5007 scope.go:117] "RemoveContainer" containerID="e57da5449a57ba51d91b81003feb8fb5a2bbe7efc3b8e117485344f77959efa3" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.186922 5007 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524141-5wsz4"] Feb 18 21:01:00 crc kubenswrapper[5007]: E0218 21:01:00.189965 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38608ac3-8740-4270-a610-420a830284ef" containerName="collect-profiles" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.190151 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="38608ac3-8740-4270-a610-420a830284ef" containerName="collect-profiles" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.190892 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="38608ac3-8740-4270-a610-420a830284ef" containerName="collect-profiles" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.192189 5007 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.199303 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-fernet-keys\") pod \"keystone-cron-29524141-5wsz4\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.199600 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-config-data\") pod \"keystone-cron-29524141-5wsz4\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.199811 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-combined-ca-bundle\") pod \"keystone-cron-29524141-5wsz4\" 
(UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.200059 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcpgc\" (UniqueName: \"kubernetes.io/projected/3c3abcf9-8aaf-4b24-9904-34db7b60177a-kube-api-access-mcpgc\") pod \"keystone-cron-29524141-5wsz4\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.220321 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524141-5wsz4"] Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.302982 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcpgc\" (UniqueName: \"kubernetes.io/projected/3c3abcf9-8aaf-4b24-9904-34db7b60177a-kube-api-access-mcpgc\") pod \"keystone-cron-29524141-5wsz4\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.303174 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-fernet-keys\") pod \"keystone-cron-29524141-5wsz4\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.303579 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-config-data\") pod \"keystone-cron-29524141-5wsz4\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.303723 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-combined-ca-bundle\") pod \"keystone-cron-29524141-5wsz4\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.312185 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-fernet-keys\") pod \"keystone-cron-29524141-5wsz4\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.313498 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-config-data\") pod \"keystone-cron-29524141-5wsz4\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.314371 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-combined-ca-bundle\") pod \"keystone-cron-29524141-5wsz4\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.323990 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcpgc\" (UniqueName: \"kubernetes.io/projected/3c3abcf9-8aaf-4b24-9904-34db7b60177a-kube-api-access-mcpgc\") pod \"keystone-cron-29524141-5wsz4\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:00 crc kubenswrapper[5007]: I0218 21:01:00.537776 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:01 crc kubenswrapper[5007]: I0218 21:01:01.060232 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524141-5wsz4"] Feb 18 21:01:01 crc kubenswrapper[5007]: I0218 21:01:01.227484 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524141-5wsz4" event={"ID":"3c3abcf9-8aaf-4b24-9904-34db7b60177a","Type":"ContainerStarted","Data":"855b5e6849e3b5be04929a8fd6de95d2ea5a213ca78f28790fc5b26b293da3b5"} Feb 18 21:01:02 crc kubenswrapper[5007]: I0218 21:01:02.238523 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524141-5wsz4" event={"ID":"3c3abcf9-8aaf-4b24-9904-34db7b60177a","Type":"ContainerStarted","Data":"079e71950536da3ca69ce554bf1be1d2a500275bbdf43a867a5630662f51c383"} Feb 18 21:01:02 crc kubenswrapper[5007]: I0218 21:01:02.274352 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524141-5wsz4" podStartSLOduration=2.27433169 podStartE2EDuration="2.27433169s" podCreationTimestamp="2026-02-18 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 21:01:02.263957312 +0000 UTC m=+4947.046076856" watchObservedRunningTime="2026-02-18 21:01:02.27433169 +0000 UTC m=+4947.056451234" Feb 18 21:01:05 crc kubenswrapper[5007]: I0218 21:01:05.276382 5007 generic.go:334] "Generic (PLEG): container finished" podID="3c3abcf9-8aaf-4b24-9904-34db7b60177a" containerID="079e71950536da3ca69ce554bf1be1d2a500275bbdf43a867a5630662f51c383" exitCode=0 Feb 18 21:01:05 crc kubenswrapper[5007]: I0218 21:01:05.276550 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524141-5wsz4" event={"ID":"3c3abcf9-8aaf-4b24-9904-34db7b60177a","Type":"ContainerDied","Data":"079e71950536da3ca69ce554bf1be1d2a500275bbdf43a867a5630662f51c383"} 
Feb 18 21:01:05 crc kubenswrapper[5007]: E0218 21:01:05.931752 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.674722 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.763737 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-combined-ca-bundle\") pod \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.763837 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcpgc\" (UniqueName: \"kubernetes.io/projected/3c3abcf9-8aaf-4b24-9904-34db7b60177a-kube-api-access-mcpgc\") pod \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.763897 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-config-data\") pod \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.764080 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-fernet-keys\") pod \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\" (UID: \"3c3abcf9-8aaf-4b24-9904-34db7b60177a\") " 
Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.769907 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3abcf9-8aaf-4b24-9904-34db7b60177a-kube-api-access-mcpgc" (OuterVolumeSpecName: "kube-api-access-mcpgc") pod "3c3abcf9-8aaf-4b24-9904-34db7b60177a" (UID: "3c3abcf9-8aaf-4b24-9904-34db7b60177a"). InnerVolumeSpecName "kube-api-access-mcpgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.770320 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3c3abcf9-8aaf-4b24-9904-34db7b60177a" (UID: "3c3abcf9-8aaf-4b24-9904-34db7b60177a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.798781 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c3abcf9-8aaf-4b24-9904-34db7b60177a" (UID: "3c3abcf9-8aaf-4b24-9904-34db7b60177a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.817260 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-config-data" (OuterVolumeSpecName: "config-data") pod "3c3abcf9-8aaf-4b24-9904-34db7b60177a" (UID: "3c3abcf9-8aaf-4b24-9904-34db7b60177a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.866383 5007 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.866414 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcpgc\" (UniqueName: \"kubernetes.io/projected/3c3abcf9-8aaf-4b24-9904-34db7b60177a-kube-api-access-mcpgc\") on node \"crc\" DevicePath \"\"" Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.866438 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 21:01:06 crc kubenswrapper[5007]: I0218 21:01:06.866446 5007 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c3abcf9-8aaf-4b24-9904-34db7b60177a-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 21:01:06 crc kubenswrapper[5007]: E0218 21:01:06.911828 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:01:07 crc kubenswrapper[5007]: I0218 21:01:07.301684 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524141-5wsz4" event={"ID":"3c3abcf9-8aaf-4b24-9904-34db7b60177a","Type":"ContainerDied","Data":"855b5e6849e3b5be04929a8fd6de95d2ea5a213ca78f28790fc5b26b293da3b5"} Feb 18 21:01:07 crc kubenswrapper[5007]: I0218 21:01:07.301881 5007 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="855b5e6849e3b5be04929a8fd6de95d2ea5a213ca78f28790fc5b26b293da3b5" Feb 18 21:01:07 crc kubenswrapper[5007]: I0218 21:01:07.301901 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524141-5wsz4" Feb 18 21:01:07 crc kubenswrapper[5007]: E0218 21:01:07.911490 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:01:10 crc kubenswrapper[5007]: E0218 21:01:10.024525 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: can't talk to a V1 container registry" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 21:01:10 crc kubenswrapper[5007]: E0218 21:01:10.025248 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnmkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t5gwr_openshift-marketplace(d79bddda-80b1-41bc-92e9-1d41a7c7c84a): ErrImagePull: initializing source 
docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: can't talk to a V1 container registry" logger="UnhandledError" Feb 18 21:01:10 crc kubenswrapper[5007]: E0218 21:01:10.026509 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: can't talk to a V1 container registry\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:01:15 crc kubenswrapper[5007]: I0218 21:01:15.663867 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:01:15 crc kubenswrapper[5007]: I0218 21:01:15.666018 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:01:15 crc kubenswrapper[5007]: I0218 21:01:15.666363 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 21:01:15 crc kubenswrapper[5007]: I0218 21:01:15.667744 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c76dafda13d3d8e91643709e4490f68507411ab8c12caecfa60bdf55c7bb3c43"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 21:01:15 crc kubenswrapper[5007]: I0218 21:01:15.668074 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://c76dafda13d3d8e91643709e4490f68507411ab8c12caecfa60bdf55c7bb3c43" gracePeriod=600 Feb 18 21:01:16 crc kubenswrapper[5007]: I0218 21:01:16.445636 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="c76dafda13d3d8e91643709e4490f68507411ab8c12caecfa60bdf55c7bb3c43" exitCode=0 Feb 18 21:01:16 crc kubenswrapper[5007]: I0218 21:01:16.445726 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"c76dafda13d3d8e91643709e4490f68507411ab8c12caecfa60bdf55c7bb3c43"} Feb 18 21:01:16 crc kubenswrapper[5007]: I0218 21:01:16.446242 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a"} Feb 18 21:01:16 crc kubenswrapper[5007]: I0218 21:01:16.446267 5007 scope.go:117] "RemoveContainer" containerID="4680aed5c4a42e77735e677ce15e159583ddc57c4786e966533d12e426f5952d" Feb 18 21:01:17 crc kubenswrapper[5007]: E0218 21:01:17.912072 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 
21:01:20 crc kubenswrapper[5007]: E0218 21:01:20.914484 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:01:20 crc kubenswrapper[5007]: E0218 21:01:20.914618 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:01:21 crc kubenswrapper[5007]: E0218 21:01:21.914656 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:01:30 crc kubenswrapper[5007]: E0218 21:01:30.912831 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:01:33 crc kubenswrapper[5007]: E0218 21:01:33.912650 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:01:33 crc kubenswrapper[5007]: E0218 21:01:33.912736 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:01:34 crc kubenswrapper[5007]: E0218 21:01:34.911709 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:01:43 crc kubenswrapper[5007]: E0218 21:01:43.912508 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:01:44 crc kubenswrapper[5007]: E0218 21:01:44.911678 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:01:46 crc kubenswrapper[5007]: E0218 21:01:46.913298 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:01:49 crc kubenswrapper[5007]: E0218 21:01:49.914564 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:01:55 crc kubenswrapper[5007]: E0218 21:01:55.930390 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:01:57 crc kubenswrapper[5007]: E0218 21:01:57.292111 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 502 Bad Gateway" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 21:01:57 crc kubenswrapper[5007]: E0218 21:01:57.292608 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gjmxc_openshift-marketplace(9e85c638-b405-4e6c-baeb-5955fc34cb6e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 502 Bad Gateway" logger="UnhandledError" Feb 18 21:01:57 crc kubenswrapper[5007]: E0218 21:01:57.293776 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in 
registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 502 Bad Gateway\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:01:59 crc kubenswrapper[5007]: E0218 21:01:59.912027 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:02:01 crc kubenswrapper[5007]: E0218 21:02:01.913101 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:02:08 crc kubenswrapper[5007]: E0218 21:02:08.912689 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:02:08 crc kubenswrapper[5007]: E0218 21:02:08.912780 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:02:12 crc kubenswrapper[5007]: E0218 21:02:12.913379 5007 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:02:13 crc kubenswrapper[5007]: E0218 21:02:13.912365 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:02:22 crc kubenswrapper[5007]: E0218 21:02:22.912252 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:02:23 crc kubenswrapper[5007]: E0218 21:02:23.914084 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:02:25 crc kubenswrapper[5007]: E0218 21:02:25.919213 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" 
podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:02:27 crc kubenswrapper[5007]: E0218 21:02:27.912374 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:02:33 crc kubenswrapper[5007]: E0218 21:02:33.913913 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:02:36 crc kubenswrapper[5007]: E0218 21:02:36.911928 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:02:36 crc kubenswrapper[5007]: E0218 21:02:36.912432 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:02:40 crc kubenswrapper[5007]: E0218 21:02:40.911124 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:02:45 crc kubenswrapper[5007]: E0218 21:02:45.926556 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:02:49 crc kubenswrapper[5007]: E0218 21:02:49.913530 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:02:51 crc kubenswrapper[5007]: E0218 21:02:51.913571 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:02:55 crc kubenswrapper[5007]: E0218 21:02:55.913672 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:02:59 crc kubenswrapper[5007]: E0218 21:02:59.912366 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:03:02 crc kubenswrapper[5007]: E0218 21:03:02.913918 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:03:02 crc kubenswrapper[5007]: E0218 21:03:02.913961 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:03:07 crc kubenswrapper[5007]: E0218 21:03:07.911627 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:03:11 crc kubenswrapper[5007]: E0218 21:03:11.912399 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" 
podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:03:14 crc kubenswrapper[5007]: E0218 21:03:14.096838 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-marketplace-index: received unexpected HTTP status: 502 Bad Gateway" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 21:03:14 crc kubenswrapper[5007]: E0218 21:03:14.097355 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t742h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termi
nationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bx7n9_openshift-marketplace(527ad024-e9b5-4aaf-96cf-d8f86ac2e70d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-marketplace-index: received unexpected HTTP status: 502 Bad Gateway" logger="UnhandledError" Feb 18 21:03:14 crc kubenswrapper[5007]: E0218 21:03:14.098562 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-marketplace-index: received unexpected HTTP status: 502 Bad Gateway\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:03:14 crc kubenswrapper[5007]: E0218 21:03:14.912865 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:03:20 crc kubenswrapper[5007]: E0218 21:03:20.915018 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:03:25 crc kubenswrapper[5007]: E0218 21:03:25.931322 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:03:25 crc kubenswrapper[5007]: E0218 21:03:25.933034 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:03:25 crc kubenswrapper[5007]: E0218 21:03:25.933539 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:03:32 crc kubenswrapper[5007]: E0218 21:03:32.912791 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:03:37 crc kubenswrapper[5007]: E0218 21:03:37.912668 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" 
podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:03:38 crc kubenswrapper[5007]: E0218 21:03:38.913591 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:03:40 crc kubenswrapper[5007]: E0218 21:03:40.911323 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:03:45 crc kubenswrapper[5007]: I0218 21:03:45.664032 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:03:45 crc kubenswrapper[5007]: I0218 21:03:45.664554 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:03:46 crc kubenswrapper[5007]: E0218 21:03:46.912154 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:03:50 crc kubenswrapper[5007]: E0218 21:03:50.913625 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:03:51 crc kubenswrapper[5007]: E0218 21:03:51.913614 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:03:54 crc kubenswrapper[5007]: E0218 21:03:54.912772 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:03:57 crc kubenswrapper[5007]: E0218 21:03:57.912824 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:04:03 crc kubenswrapper[5007]: E0218 21:04:03.912779 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:04:04 crc kubenswrapper[5007]: E0218 21:04:04.912202 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:04:06 crc kubenswrapper[5007]: E0218 21:04:06.913629 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:04:08 crc kubenswrapper[5007]: E0218 21:04:08.911744 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:04:15 crc kubenswrapper[5007]: I0218 21:04:15.663908 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:04:15 crc kubenswrapper[5007]: I0218 21:04:15.664336 5007 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:04:18 crc kubenswrapper[5007]: E0218 21:04:18.913310 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:04:18 crc kubenswrapper[5007]: E0218 21:04:18.913364 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:04:20 crc kubenswrapper[5007]: E0218 21:04:20.914084 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:04:22 crc kubenswrapper[5007]: E0218 21:04:22.912105 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:04:29 crc 
kubenswrapper[5007]: E0218 21:04:29.916541 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:04:29 crc kubenswrapper[5007]: E0218 21:04:29.917115 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:04:31 crc kubenswrapper[5007]: E0218 21:04:31.913459 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:04:34 crc kubenswrapper[5007]: E0218 21:04:34.913519 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:04:40 crc kubenswrapper[5007]: E0218 21:04:40.911717 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:04:41 crc kubenswrapper[5007]: E0218 21:04:41.911285 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:04:42 crc kubenswrapper[5007]: E0218 21:04:42.912384 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:04:45 crc kubenswrapper[5007]: I0218 21:04:45.664003 5007 patch_prober.go:28] interesting pod/machine-config-daemon-d8p88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:04:45 crc kubenswrapper[5007]: I0218 21:04:45.664371 5007 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:04:45 crc kubenswrapper[5007]: I0218 21:04:45.664432 5007 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" Feb 18 21:04:45 crc 
kubenswrapper[5007]: I0218 21:04:45.665350 5007 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a"} pod="openshift-machine-config-operator/machine-config-daemon-d8p88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 21:04:45 crc kubenswrapper[5007]: I0218 21:04:45.665410 5007 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerName="machine-config-daemon" containerID="cri-o://07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" gracePeriod=600 Feb 18 21:04:45 crc kubenswrapper[5007]: E0218 21:04:45.790193 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:04:45 crc kubenswrapper[5007]: I0218 21:04:45.843623 5007 generic.go:334] "Generic (PLEG): container finished" podID="e1f79950-bc88-4358-844e-ba7f87d1b564" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" exitCode=0 Feb 18 21:04:45 crc kubenswrapper[5007]: I0218 21:04:45.843668 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerDied","Data":"07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a"} Feb 18 21:04:45 crc kubenswrapper[5007]: I0218 21:04:45.843704 5007 scope.go:117] "RemoveContainer" 
containerID="c76dafda13d3d8e91643709e4490f68507411ab8c12caecfa60bdf55c7bb3c43" Feb 18 21:04:45 crc kubenswrapper[5007]: I0218 21:04:45.844412 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:04:45 crc kubenswrapper[5007]: E0218 21:04:45.844715 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:04:49 crc kubenswrapper[5007]: E0218 21:04:49.913351 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:04:55 crc kubenswrapper[5007]: E0218 21:04:55.927970 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:04:55 crc kubenswrapper[5007]: E0218 21:04:55.928181 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:04:57 crc kubenswrapper[5007]: I0218 21:04:57.910129 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:04:57 crc kubenswrapper[5007]: E0218 21:04:57.912043 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:04:57 crc kubenswrapper[5007]: E0218 21:04:57.912669 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:05:04 crc kubenswrapper[5007]: E0218 21:05:04.913182 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:05:09 crc kubenswrapper[5007]: E0218 21:05:09.911657 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:05:10 crc kubenswrapper[5007]: I0218 21:05:10.909786 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:05:10 crc kubenswrapper[5007]: E0218 21:05:10.910513 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:05:10 crc kubenswrapper[5007]: E0218 21:05:10.912314 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:05:11 crc kubenswrapper[5007]: E0218 21:05:11.911858 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:05:17 crc kubenswrapper[5007]: E0218 21:05:17.911727 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" 
podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:05:24 crc kubenswrapper[5007]: I0218 21:05:24.911516 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:05:24 crc kubenswrapper[5007]: E0218 21:05:24.913118 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:05:24 crc kubenswrapper[5007]: E0218 21:05:24.913140 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:05:24 crc kubenswrapper[5007]: E0218 21:05:24.913247 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:05:24 crc kubenswrapper[5007]: E0218 21:05:24.912834 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:05:28 crc kubenswrapper[5007]: E0218 
21:05:28.913161 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:05:35 crc kubenswrapper[5007]: I0218 21:05:35.919202 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:05:35 crc kubenswrapper[5007]: E0218 21:05:35.919995 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:05:38 crc kubenswrapper[5007]: E0218 21:05:38.912783 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:05:39 crc kubenswrapper[5007]: E0218 21:05:39.912759 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:05:39 crc kubenswrapper[5007]: E0218 21:05:39.912862 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:05:41 crc kubenswrapper[5007]: E0218 21:05:41.915523 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:05:47 crc kubenswrapper[5007]: I0218 21:05:47.911751 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:05:47 crc kubenswrapper[5007]: E0218 21:05:47.913040 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:05:51 crc kubenswrapper[5007]: E0218 21:05:51.913822 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:05:52 crc kubenswrapper[5007]: I0218 21:05:52.912414 5007 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 18 21:05:53 crc kubenswrapper[5007]: E0218 21:05:53.911491 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:05:54 crc kubenswrapper[5007]: E0218 21:05:54.911911 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:06:00 crc kubenswrapper[5007]: I0218 21:06:00.910799 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:06:00 crc kubenswrapper[5007]: E0218 21:06:00.911774 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:06:02 crc kubenswrapper[5007]: E0218 21:06:02.916016 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:06:06 crc kubenswrapper[5007]: 
E0218 21:06:06.912136 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:06:07 crc kubenswrapper[5007]: E0218 21:06:07.912417 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:06:12 crc kubenswrapper[5007]: I0218 21:06:12.910378 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:06:12 crc kubenswrapper[5007]: E0218 21:06:12.911471 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:06:13 crc kubenswrapper[5007]: E0218 21:06:13.032861 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" 
Feb 18 21:06:13 crc kubenswrapper[5007]: E0218 21:06:13.033130 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:06:13 crc kubenswrapper[5007]: E0218 21:06:13.034458 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:06:14 crc 
kubenswrapper[5007]: E0218 21:06:14.912395 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:06:17 crc kubenswrapper[5007]: E0218 21:06:17.912984 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:06:17 crc kubenswrapper[5007]: I0218 21:06:17.937879 5007 generic.go:334] "Generic (PLEG): container finished" podID="922c00a0-f93b-4c56-8597-10458c861c67" containerID="68b7e88ea273b8cfab5aaa2e08552dc834856ed4d2f554422fc0050c06c9857f" exitCode=0 Feb 18 21:06:17 crc kubenswrapper[5007]: I0218 21:06:17.938010 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"922c00a0-f93b-4c56-8597-10458c861c67","Type":"ContainerDied","Data":"68b7e88ea273b8cfab5aaa2e08552dc834856ed4d2f554422fc0050c06c9857f"} Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.326349 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.493262 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm2gl\" (UniqueName: \"kubernetes.io/projected/922c00a0-f93b-4c56-8597-10458c861c67-kube-api-access-vm2gl\") pod \"922c00a0-f93b-4c56-8597-10458c861c67\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.493367 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/922c00a0-f93b-4c56-8597-10458c861c67-test-operator-ephemeral-temporary\") pod \"922c00a0-f93b-4c56-8597-10458c861c67\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.493519 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-ca-certs\") pod \"922c00a0-f93b-4c56-8597-10458c861c67\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.493576 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/922c00a0-f93b-4c56-8597-10458c861c67-openstack-config\") pod \"922c00a0-f93b-4c56-8597-10458c861c67\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.493971 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"922c00a0-f93b-4c56-8597-10458c861c67\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.494294 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/922c00a0-f93b-4c56-8597-10458c861c67-config-data\") pod \"922c00a0-f93b-4c56-8597-10458c861c67\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.494323 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-ssh-key\") pod \"922c00a0-f93b-4c56-8597-10458c861c67\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.494143 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/922c00a0-f93b-4c56-8597-10458c861c67-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "922c00a0-f93b-4c56-8597-10458c861c67" (UID: "922c00a0-f93b-4c56-8597-10458c861c67"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.494422 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-openstack-config-secret\") pod \"922c00a0-f93b-4c56-8597-10458c861c67\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.494497 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/922c00a0-f93b-4c56-8597-10458c861c67-test-operator-ephemeral-workdir\") pod \"922c00a0-f93b-4c56-8597-10458c861c67\" (UID: \"922c00a0-f93b-4c56-8597-10458c861c67\") " Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.495397 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/922c00a0-f93b-4c56-8597-10458c861c67-config-data" (OuterVolumeSpecName: "config-data") pod "922c00a0-f93b-4c56-8597-10458c861c67" (UID: "922c00a0-f93b-4c56-8597-10458c861c67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.496162 5007 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/922c00a0-f93b-4c56-8597-10458c861c67-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.496183 5007 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/922c00a0-f93b-4c56-8597-10458c861c67-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.499679 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/922c00a0-f93b-4c56-8597-10458c861c67-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "922c00a0-f93b-4c56-8597-10458c861c67" (UID: "922c00a0-f93b-4c56-8597-10458c861c67"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.500230 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "922c00a0-f93b-4c56-8597-10458c861c67" (UID: "922c00a0-f93b-4c56-8597-10458c861c67"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.500433 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/922c00a0-f93b-4c56-8597-10458c861c67-kube-api-access-vm2gl" (OuterVolumeSpecName: "kube-api-access-vm2gl") pod "922c00a0-f93b-4c56-8597-10458c861c67" (UID: "922c00a0-f93b-4c56-8597-10458c861c67"). InnerVolumeSpecName "kube-api-access-vm2gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.529464 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "922c00a0-f93b-4c56-8597-10458c861c67" (UID: "922c00a0-f93b-4c56-8597-10458c861c67"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.532928 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "922c00a0-f93b-4c56-8597-10458c861c67" (UID: "922c00a0-f93b-4c56-8597-10458c861c67"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.544712 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "922c00a0-f93b-4c56-8597-10458c861c67" (UID: "922c00a0-f93b-4c56-8597-10458c861c67"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.576345 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/922c00a0-f93b-4c56-8597-10458c861c67-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "922c00a0-f93b-4c56-8597-10458c861c67" (UID: "922c00a0-f93b-4c56-8597-10458c861c67"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.597955 5007 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/922c00a0-f93b-4c56-8597-10458c861c67-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.598012 5007 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.598022 5007 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.598034 5007 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.598047 5007 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/922c00a0-f93b-4c56-8597-10458c861c67-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.598058 5007 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-vm2gl\" (UniqueName: \"kubernetes.io/projected/922c00a0-f93b-4c56-8597-10458c861c67-kube-api-access-vm2gl\") on node \"crc\" DevicePath \"\"" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.598066 5007 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/922c00a0-f93b-4c56-8597-10458c861c67-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.633915 5007 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.699798 5007 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.956027 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"922c00a0-f93b-4c56-8597-10458c861c67","Type":"ContainerDied","Data":"7572b539d40f2ddeaaefbaeafd9fee84cff175583af743a90ba13804b2150dce"} Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.956070 5007 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7572b539d40f2ddeaaefbaeafd9fee84cff175583af743a90ba13804b2150dce" Feb 18 21:06:19 crc kubenswrapper[5007]: I0218 21:06:19.956078 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.195698 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 21:06:23 crc kubenswrapper[5007]: E0218 21:06:23.197641 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="922c00a0-f93b-4c56-8597-10458c861c67" containerName="tempest-tests-tempest-tests-runner" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.197662 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="922c00a0-f93b-4c56-8597-10458c861c67" containerName="tempest-tests-tempest-tests-runner" Feb 18 21:06:23 crc kubenswrapper[5007]: E0218 21:06:23.197681 5007 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3abcf9-8aaf-4b24-9904-34db7b60177a" containerName="keystone-cron" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.197687 5007 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3abcf9-8aaf-4b24-9904-34db7b60177a" containerName="keystone-cron" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.197866 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3abcf9-8aaf-4b24-9904-34db7b60177a" containerName="keystone-cron" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.197877 5007 memory_manager.go:354] "RemoveStaleState removing state" podUID="922c00a0-f93b-4c56-8597-10458c861c67" containerName="tempest-tests-tempest-tests-runner" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.198551 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.203213 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-45s97" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.220258 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.279731 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"22886784-40fe-4c1b-984e-96550487babf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.279862 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7s4f\" (UniqueName: \"kubernetes.io/projected/22886784-40fe-4c1b-984e-96550487babf-kube-api-access-m7s4f\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"22886784-40fe-4c1b-984e-96550487babf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.381663 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"22886784-40fe-4c1b-984e-96550487babf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.381761 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7s4f\" (UniqueName: 
\"kubernetes.io/projected/22886784-40fe-4c1b-984e-96550487babf-kube-api-access-m7s4f\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"22886784-40fe-4c1b-984e-96550487babf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.382082 5007 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"22886784-40fe-4c1b-984e-96550487babf\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.406989 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7s4f\" (UniqueName: \"kubernetes.io/projected/22886784-40fe-4c1b-984e-96550487babf-kube-api-access-m7s4f\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"22886784-40fe-4c1b-984e-96550487babf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.408115 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"22886784-40fe-4c1b-984e-96550487babf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.538353 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:06:23 crc kubenswrapper[5007]: I0218 21:06:23.989276 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 21:06:24 crc kubenswrapper[5007]: I0218 21:06:24.910677 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:06:24 crc kubenswrapper[5007]: E0218 21:06:24.911013 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:06:25 crc kubenswrapper[5007]: I0218 21:06:25.008087 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"22886784-40fe-4c1b-984e-96550487babf","Type":"ContainerStarted","Data":"e53dcbb92b78ebff0ce6fb8c1953cac2af8934b233e880f797609dc6e1dd9352"} Feb 18 21:06:25 crc kubenswrapper[5007]: E0218 21:06:25.919707 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:06:26 crc kubenswrapper[5007]: E0218 21:06:26.912212 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:06:32 crc kubenswrapper[5007]: E0218 21:06:32.912371 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:06:34 crc kubenswrapper[5007]: E0218 21:06:34.112073 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 502 Bad Gateway" image="quay.io/quay/busybox:latest" Feb 18 21:06:34 crc kubenswrapper[5007]: E0218 21:06:34.112669 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7s4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(22886784-40fe-4c1b-984e-96550487babf): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 502 Bad Gateway" logger="UnhandledError" Feb 18 21:06:34 crc kubenswrapper[5007]: E0218 21:06:34.114081 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 502 Bad Gateway\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:06:35 crc kubenswrapper[5007]: E0218 21:06:35.150795 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:06:35 crc kubenswrapper[5007]: I0218 21:06:35.925578 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:06:35 crc kubenswrapper[5007]: E0218 21:06:35.925854 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:06:38 crc kubenswrapper[5007]: E0218 21:06:38.915871 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:06:40 crc kubenswrapper[5007]: E0218 21:06:40.018484 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" 
image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 21:06:40 crc kubenswrapper[5007]: E0218 21:06:40.018666 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnmkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t5gwr_openshift-marketplace(d79bddda-80b1-41bc-92e9-1d41a7c7c84a): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:06:40 crc kubenswrapper[5007]: E0218 21:06:40.019874 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:06:40 crc 
kubenswrapper[5007]: E0218 21:06:40.913034 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:06:47 crc kubenswrapper[5007]: E0218 21:06:47.912198 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:06:49 crc kubenswrapper[5007]: E0218 21:06:49.914910 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:06:50 crc kubenswrapper[5007]: I0218 21:06:50.910241 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:06:50 crc kubenswrapper[5007]: E0218 21:06:50.911229 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:06:53 crc kubenswrapper[5007]: E0218 21:06:53.913271 5007 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:06:54 crc kubenswrapper[5007]: E0218 21:06:54.912183 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:07:02 crc kubenswrapper[5007]: E0218 21:07:02.911804 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:07:02 crc kubenswrapper[5007]: E0218 21:07:02.911805 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:07:04 crc kubenswrapper[5007]: I0218 21:07:04.909418 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:07:04 crc kubenswrapper[5007]: E0218 21:07:04.910043 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:07:07 crc kubenswrapper[5007]: E0218 21:07:07.911308 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:07:10 crc kubenswrapper[5007]: E0218 21:07:10.035192 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:07:10 crc kubenswrapper[5007]: E0218 21:07:10.035826 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7s4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(22886784-40fe-4c1b-984e-96550487babf): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:07:10 crc kubenswrapper[5007]: E0218 21:07:10.039129 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:07:15 crc kubenswrapper[5007]: E0218 21:07:15.928425 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:07:15 crc kubenswrapper[5007]: E0218 21:07:15.928657 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:07:17 crc kubenswrapper[5007]: E0218 21:07:17.207029 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 21:07:17 crc kubenswrapper[5007]: E0218 21:07:17.207397 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qx95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gjmxc_openshift-marketplace(9e85c638-b405-4e6c-baeb-5955fc34cb6e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout" logger="UnhandledError" Feb 18 21:07:17 crc kubenswrapper[5007]: E0218 21:07:17.208803 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in 
registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:07:18 crc kubenswrapper[5007]: I0218 21:07:18.910505 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:07:18 crc kubenswrapper[5007]: E0218 21:07:18.911248 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:07:18 crc kubenswrapper[5007]: E0218 21:07:18.913800 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:07:20 crc kubenswrapper[5007]: E0218 21:07:20.911969 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:07:27 crc kubenswrapper[5007]: E0218 21:07:27.917206 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:07:29 crc kubenswrapper[5007]: I0218 21:07:29.910991 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:07:29 crc kubenswrapper[5007]: E0218 21:07:29.911571 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:07:29 crc kubenswrapper[5007]: E0218 21:07:29.913136 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:07:30 crc kubenswrapper[5007]: E0218 21:07:30.914055 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:07:31 crc kubenswrapper[5007]: E0218 21:07:31.911604 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:07:40 crc kubenswrapper[5007]: I0218 21:07:40.911243 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:07:40 crc kubenswrapper[5007]: E0218 21:07:40.911418 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:07:40 crc kubenswrapper[5007]: E0218 21:07:40.912524 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:07:41 crc kubenswrapper[5007]: E0218 21:07:41.912166 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:07:42 crc kubenswrapper[5007]: E0218 21:07:42.912549 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:07:45 crc kubenswrapper[5007]: E0218 21:07:45.920649 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:07:53 crc kubenswrapper[5007]: E0218 21:07:53.914412 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:07:54 crc kubenswrapper[5007]: E0218 21:07:54.911877 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:07:55 crc kubenswrapper[5007]: I0218 21:07:55.916322 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:07:55 crc kubenswrapper[5007]: E0218 21:07:55.916730 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:07:56 crc kubenswrapper[5007]: E0218 21:07:56.060077 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:07:56 crc kubenswrapper[5007]: E0218 21:07:56.060222 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7s4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(22886784-40fe-4c1b-984e-96550487babf): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:07:56 crc kubenswrapper[5007]: E0218 21:07:56.061390 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:07:56 crc kubenswrapper[5007]: E0218 21:07:56.912784 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:07:58 crc kubenswrapper[5007]: E0218 21:07:58.912973 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:08:07 crc kubenswrapper[5007]: E0218 21:08:07.914956 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" 
Feb 18 21:08:09 crc kubenswrapper[5007]: I0218 21:08:09.909818 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:08:09 crc kubenswrapper[5007]: E0218 21:08:09.910484 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:08:09 crc kubenswrapper[5007]: E0218 21:08:09.912124 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:08:09 crc kubenswrapper[5007]: E0218 21:08:09.912170 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:08:10 crc kubenswrapper[5007]: E0218 21:08:10.911410 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:08:10 crc 
kubenswrapper[5007]: E0218 21:08:10.912010 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:08:21 crc kubenswrapper[5007]: E0218 21:08:21.914386 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:08:21 crc kubenswrapper[5007]: E0218 21:08:21.914552 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:08:22 crc kubenswrapper[5007]: I0218 21:08:22.910043 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:08:22 crc kubenswrapper[5007]: E0218 21:08:22.910670 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:08:23 crc kubenswrapper[5007]: E0218 21:08:23.913310 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:08:23 crc kubenswrapper[5007]: E0218 21:08:23.913392 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:08:32 crc kubenswrapper[5007]: E0218 21:08:32.153093 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-marketplace-index: received unexpected HTTP status: 504 Gateway Timeout" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 21:08:32 crc kubenswrapper[5007]: E0218 21:08:32.153786 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t742h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bx7n9_openshift-marketplace(527ad024-e9b5-4aaf-96cf-d8f86ac2e70d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-marketplace-index: received unexpected HTTP status: 504 Gateway Timeout" logger="UnhandledError" Feb 18 21:08:32 crc kubenswrapper[5007]: E0218 21:08:32.154928 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: reading manifest v4.18 in 
registry.redhat.io/redhat/redhat-marketplace-index: received unexpected HTTP status: 504 Gateway Timeout\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:08:33 crc kubenswrapper[5007]: E0218 21:08:33.913186 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:08:34 crc kubenswrapper[5007]: I0218 21:08:34.910705 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:08:34 crc kubenswrapper[5007]: E0218 21:08:34.911493 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:08:35 crc kubenswrapper[5007]: E0218 21:08:35.920900 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:08:35 crc kubenswrapper[5007]: E0218 21:08:35.921637 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:08:36 crc kubenswrapper[5007]: E0218 21:08:36.913203 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:08:46 crc kubenswrapper[5007]: I0218 21:08:46.910370 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:08:46 crc kubenswrapper[5007]: E0218 21:08:46.911643 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:08:47 crc kubenswrapper[5007]: E0218 21:08:47.911729 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:08:47 crc kubenswrapper[5007]: E0218 21:08:47.911755 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:08:47 crc kubenswrapper[5007]: E0218 21:08:47.912595 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:08:48 crc kubenswrapper[5007]: E0218 21:08:48.912727 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:08:54 crc kubenswrapper[5007]: I0218 21:08:54.927741 5007 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-87qgc/must-gather-7plh7"] Feb 18 21:08:54 crc kubenswrapper[5007]: I0218 21:08:54.930040 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-87qgc/must-gather-7plh7" Feb 18 21:08:54 crc kubenswrapper[5007]: I0218 21:08:54.932473 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-87qgc"/"openshift-service-ca.crt" Feb 18 21:08:54 crc kubenswrapper[5007]: I0218 21:08:54.932831 5007 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-87qgc"/"default-dockercfg-59927" Feb 18 21:08:54 crc kubenswrapper[5007]: I0218 21:08:54.932884 5007 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-87qgc"/"kube-root-ca.crt" Feb 18 21:08:54 crc kubenswrapper[5007]: I0218 21:08:54.937382 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-87qgc/must-gather-7plh7"] Feb 18 21:08:54 crc kubenswrapper[5007]: I0218 21:08:54.937812 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ab9a7aac-2f72-4c4c-b860-922e6d508170-must-gather-output\") pod \"must-gather-7plh7\" (UID: \"ab9a7aac-2f72-4c4c-b860-922e6d508170\") " pod="openshift-must-gather-87qgc/must-gather-7plh7" Feb 18 21:08:54 crc kubenswrapper[5007]: I0218 21:08:54.937984 5007 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzfrg\" (UniqueName: \"kubernetes.io/projected/ab9a7aac-2f72-4c4c-b860-922e6d508170-kube-api-access-wzfrg\") pod \"must-gather-7plh7\" (UID: \"ab9a7aac-2f72-4c4c-b860-922e6d508170\") " pod="openshift-must-gather-87qgc/must-gather-7plh7" Feb 18 21:08:55 crc kubenswrapper[5007]: I0218 21:08:55.040382 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzfrg\" (UniqueName: \"kubernetes.io/projected/ab9a7aac-2f72-4c4c-b860-922e6d508170-kube-api-access-wzfrg\") pod \"must-gather-7plh7\" (UID: \"ab9a7aac-2f72-4c4c-b860-922e6d508170\") " 
pod="openshift-must-gather-87qgc/must-gather-7plh7" Feb 18 21:08:55 crc kubenswrapper[5007]: I0218 21:08:55.040547 5007 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ab9a7aac-2f72-4c4c-b860-922e6d508170-must-gather-output\") pod \"must-gather-7plh7\" (UID: \"ab9a7aac-2f72-4c4c-b860-922e6d508170\") " pod="openshift-must-gather-87qgc/must-gather-7plh7" Feb 18 21:08:55 crc kubenswrapper[5007]: I0218 21:08:55.040949 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ab9a7aac-2f72-4c4c-b860-922e6d508170-must-gather-output\") pod \"must-gather-7plh7\" (UID: \"ab9a7aac-2f72-4c4c-b860-922e6d508170\") " pod="openshift-must-gather-87qgc/must-gather-7plh7" Feb 18 21:08:55 crc kubenswrapper[5007]: I0218 21:08:55.095046 5007 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzfrg\" (UniqueName: \"kubernetes.io/projected/ab9a7aac-2f72-4c4c-b860-922e6d508170-kube-api-access-wzfrg\") pod \"must-gather-7plh7\" (UID: \"ab9a7aac-2f72-4c4c-b860-922e6d508170\") " pod="openshift-must-gather-87qgc/must-gather-7plh7" Feb 18 21:08:55 crc kubenswrapper[5007]: I0218 21:08:55.252487 5007 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-87qgc/must-gather-7plh7" Feb 18 21:08:55 crc kubenswrapper[5007]: I0218 21:08:55.766152 5007 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-87qgc/must-gather-7plh7"] Feb 18 21:08:56 crc kubenswrapper[5007]: I0218 21:08:56.749349 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-87qgc/must-gather-7plh7" event={"ID":"ab9a7aac-2f72-4c4c-b860-922e6d508170","Type":"ContainerStarted","Data":"299679f79d3d6e48c7d2575c048817820bb44eaddb61d0ad51880de1fa802b0b"} Feb 18 21:08:57 crc kubenswrapper[5007]: E0218 21:08:57.021468 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:08:57 crc kubenswrapper[5007]: E0218 21:08:57.021603 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7s4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(22886784-40fe-4c1b-984e-96550487babf): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:08:57 crc kubenswrapper[5007]: E0218 21:08:57.022793 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:08:57 crc kubenswrapper[5007]: I0218 21:08:57.909877 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:08:57 crc kubenswrapper[5007]: E0218 21:08:57.910188 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:08:58 crc kubenswrapper[5007]: E0218 21:08:58.912368 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:08:58 crc kubenswrapper[5007]: E0218 21:08:58.913180 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:09:01 crc kubenswrapper[5007]: E0218 21:09:01.913929 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:09:02 crc kubenswrapper[5007]: E0218 
21:09:02.910664 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:09:07 crc kubenswrapper[5007]: E0218 21:09:07.911935 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:09:09 crc kubenswrapper[5007]: E0218 21:09:09.913205 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:09:12 crc kubenswrapper[5007]: I0218 21:09:12.910119 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:09:12 crc kubenswrapper[5007]: E0218 21:09:12.911158 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:09:12 crc kubenswrapper[5007]: E0218 21:09:12.911341 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:09:15 crc kubenswrapper[5007]: E0218 21:09:15.892002 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Feb 18 21:09:15 crc kubenswrapper[5007]: E0218 21:09:15.892390 5007 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 21:09:15 crc kubenswrapper[5007]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Feb 18 21:09:15 crc kubenswrapper[5007]: HAVE_SESSION_TOOLS=true Feb 18 21:09:15 crc kubenswrapper[5007]: else Feb 18 21:09:15 crc kubenswrapper[5007]: HAVE_SESSION_TOOLS=false Feb 18 21:09:15 crc kubenswrapper[5007]: fi Feb 18 21:09:15 crc kubenswrapper[5007]: Feb 18 21:09:15 crc kubenswrapper[5007]: Feb 18 21:09:15 crc kubenswrapper[5007]: echo "[disk usage checker] Started" Feb 18 21:09:15 crc kubenswrapper[5007]: target_dir="/must-gather" Feb 18 21:09:15 crc kubenswrapper[5007]: usage_percentage_limit="80" Feb 18 21:09:15 crc kubenswrapper[5007]: while true; do Feb 18 21:09:15 crc kubenswrapper[5007]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Feb 18 21:09:15 crc kubenswrapper[5007]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Feb 18 21:09:15 crc kubenswrapper[5007]: if [ "$usage_percentage" -gt 
"$usage_percentage_limit" ]; then Feb 18 21:09:15 crc kubenswrapper[5007]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." Feb 18 21:09:15 crc kubenswrapper[5007]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 21:09:15 crc kubenswrapper[5007]: ps -o sess --no-headers | sort -u | while read sid; do Feb 18 21:09:15 crc kubenswrapper[5007]: [[ "$sid" -eq "${$}" ]] && continue Feb 18 21:09:15 crc kubenswrapper[5007]: pkill --signal SIGKILL --session "$sid" Feb 18 21:09:15 crc kubenswrapper[5007]: done Feb 18 21:09:15 crc kubenswrapper[5007]: else Feb 18 21:09:15 crc kubenswrapper[5007]: kill 0 Feb 18 21:09:15 crc kubenswrapper[5007]: fi Feb 18 21:09:15 crc kubenswrapper[5007]: exit 1 Feb 18 21:09:15 crc kubenswrapper[5007]: fi Feb 18 21:09:15 crc kubenswrapper[5007]: sleep 5 Feb 18 21:09:15 crc kubenswrapper[5007]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 21:09:15 crc kubenswrapper[5007]: setsid -w bash <<-MUSTGATHER_EOF Feb 18 21:09:15 crc kubenswrapper[5007]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 21:09:15 crc kubenswrapper[5007]: MUSTGATHER_EOF Feb 18 21:09:15 crc kubenswrapper[5007]: else Feb 18 21:09:15 crc kubenswrapper[5007]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 21:09:15 crc kubenswrapper[5007]: fi; sync && echo 'Caches written to 
disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wzfrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-7plh7_openshift-must-gather-87qgc(ab9a7aac-2f72-4c4c-b860-922e6d508170): ErrImagePull: initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out Feb 18 21:09:15 crc kubenswrapper[5007]: > logger="UnhandledError" Feb 18 21:09:15 crc kubenswrapper[5007]: E0218 21:09:15.895350 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\", failed to 
\"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-87qgc/must-gather-7plh7" podUID="ab9a7aac-2f72-4c4c-b860-922e6d508170" Feb 18 21:09:15 crc kubenswrapper[5007]: E0218 21:09:15.924565 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:09:15 crc kubenswrapper[5007]: E0218 21:09:15.985086 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-87qgc/must-gather-7plh7" podUID="ab9a7aac-2f72-4c4c-b860-922e6d508170" Feb 18 21:09:16 crc kubenswrapper[5007]: E0218 21:09:16.912292 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:09:20 crc kubenswrapper[5007]: E0218 21:09:20.911398 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:09:23 crc kubenswrapper[5007]: E0218 21:09:23.913982 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:09:24 crc kubenswrapper[5007]: E0218 21:09:24.912693 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:09:26 crc kubenswrapper[5007]: I0218 21:09:26.910618 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:09:26 crc kubenswrapper[5007]: E0218 21:09:26.911686 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:09:26 crc kubenswrapper[5007]: E0218 21:09:26.913873 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" 
podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:09:27 crc kubenswrapper[5007]: E0218 21:09:27.912481 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:09:30 crc kubenswrapper[5007]: I0218 21:09:30.666202 5007 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-87qgc/must-gather-7plh7"] Feb 18 21:09:30 crc kubenswrapper[5007]: I0218 21:09:30.674681 5007 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-87qgc/must-gather-7plh7"] Feb 18 21:09:31 crc kubenswrapper[5007]: E0218 21:09:31.911772 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:09:34 crc kubenswrapper[5007]: E0218 21:09:34.912253 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:09:37 crc kubenswrapper[5007]: E0218 21:09:37.911941 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:09:38 crc kubenswrapper[5007]: I0218 21:09:38.910164 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:09:38 crc kubenswrapper[5007]: E0218 21:09:38.910720 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d8p88_openshift-machine-config-operator(e1f79950-bc88-4358-844e-ba7f87d1b564)\"" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" podUID="e1f79950-bc88-4358-844e-ba7f87d1b564" Feb 18 21:09:39 crc kubenswrapper[5007]: E0218 21:09:39.912364 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:09:40 crc kubenswrapper[5007]: E0218 21:09:40.911957 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:09:43 crc kubenswrapper[5007]: E0218 21:09:43.911248 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:09:49 crc kubenswrapper[5007]: E0218 21:09:49.913351 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:09:49 crc kubenswrapper[5007]: E0218 21:09:49.913542 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:09:50 crc kubenswrapper[5007]: E0218 21:09:50.028904 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Feb 18 21:09:50 crc kubenswrapper[5007]: E0218 21:09:50.029061 5007 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 21:09:50 crc kubenswrapper[5007]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Feb 18 21:09:50 crc kubenswrapper[5007]: HAVE_SESSION_TOOLS=true Feb 18 21:09:50 crc kubenswrapper[5007]: else Feb 18 21:09:50 crc kubenswrapper[5007]: HAVE_SESSION_TOOLS=false Feb 18 21:09:50 crc kubenswrapper[5007]: fi Feb 18 21:09:50 crc kubenswrapper[5007]: Feb 18 21:09:50 crc kubenswrapper[5007]: Feb 18 
21:09:50 crc kubenswrapper[5007]: echo "[disk usage checker] Started" Feb 18 21:09:50 crc kubenswrapper[5007]: target_dir="/must-gather" Feb 18 21:09:50 crc kubenswrapper[5007]: usage_percentage_limit="80" Feb 18 21:09:50 crc kubenswrapper[5007]: while true; do Feb 18 21:09:50 crc kubenswrapper[5007]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Feb 18 21:09:50 crc kubenswrapper[5007]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Feb 18 21:09:50 crc kubenswrapper[5007]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Feb 18 21:09:50 crc kubenswrapper[5007]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." Feb 18 21:09:50 crc kubenswrapper[5007]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 21:09:50 crc kubenswrapper[5007]: ps -o sess --no-headers | sort -u | while read sid; do Feb 18 21:09:50 crc kubenswrapper[5007]: [[ "$sid" -eq "${$}" ]] && continue Feb 18 21:09:50 crc kubenswrapper[5007]: pkill --signal SIGKILL --session "$sid" Feb 18 21:09:50 crc kubenswrapper[5007]: done Feb 18 21:09:50 crc kubenswrapper[5007]: else Feb 18 21:09:50 crc kubenswrapper[5007]: kill 0 Feb 18 21:09:50 crc kubenswrapper[5007]: fi Feb 18 21:09:50 crc kubenswrapper[5007]: exit 1 Feb 18 21:09:50 crc kubenswrapper[5007]: fi Feb 18 21:09:50 crc kubenswrapper[5007]: sleep 5 Feb 18 21:09:50 crc kubenswrapper[5007]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 21:09:50 crc kubenswrapper[5007]: setsid -w bash <<-MUSTGATHER_EOF Feb 18 21:09:50 crc kubenswrapper[5007]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 21:09:50 crc kubenswrapper[5007]: MUSTGATHER_EOF Feb 18 21:09:50 crc kubenswrapper[5007]: else Feb 18 21:09:50 crc 
kubenswrapper[5007]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 21:09:50 crc kubenswrapper[5007]: fi; sync && echo 'Caches written to disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wzfrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-7plh7_openshift-must-gather-87qgc(ab9a7aac-2f72-4c4c-b860-922e6d508170): ErrImagePull: initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out Feb 18 21:09:50 crc kubenswrapper[5007]: > logger="UnhandledError" Feb 18 21:09:50 crc kubenswrapper[5007]: E0218 21:09:50.031503 5007 pod_workers.go:1301] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-87qgc/must-gather-7plh7" podUID="ab9a7aac-2f72-4c4c-b860-922e6d508170" Feb 18 21:09:50 crc kubenswrapper[5007]: E0218 21:09:50.089475 5007 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"openshift-must-gather-87qgc\" not found" event="&Event{ObjectMeta:{must-gather-7plh7.1895737099b27192 openshift-must-gather-87qgc 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-must-gather-87qgc,Name:must-gather-7plh7,UID:ab9a7aac-2f72-4c4c-b860-922e6d508170,APIVersion:v1,ResourceVersion:86649,FieldPath:spec.containers{gather},},Reason:Failed,Message:Failed to pull image \"quay.io/openstack-k8s-operators/openstack-must-gather:latest\": initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 21:09:15 +0000 UTC,LastTimestamp:2026-02-18 21:09:50.028971519 +0000 UTC m=+5474.811091053,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 21:09:50 crc kubenswrapper[5007]: E0218 21:09:50.148220 5007 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"openshift-must-gather-87qgc\" not found" event="&Event{ObjectMeta:{must-gather-7plh7.1895737099b2bcc4 openshift-must-gather-87qgc 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-must-gather-87qgc,Name:must-gather-7plh7,UID:ab9a7aac-2f72-4c4c-b860-922e6d508170,APIVersion:v1,ResourceVersion:86649,FieldPath:spec.containers{gather},},Reason:Failed,Message:Error: ErrImagePull,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 21:09:15 +0000 UTC,LastTimestamp:2026-02-18 21:09:50.0289913 +0000 UTC m=+5474.811110834,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 21:09:50 crc kubenswrapper[5007]: E0218 21:09:50.204958 5007 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"openshift-must-gather-87qgc\" not found" event="&Event{ObjectMeta:{must-gather-7plh7.1895737099df5f4c openshift-must-gather-87qgc 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-must-gather-87qgc,Name:must-gather-7plh7,UID:ab9a7aac-2f72-4c4c-b860-922e6d508170,APIVersion:v1,ResourceVersion:86649,FieldPath:spec.containers{copy},},Reason:BackOff,Message:Back-off pulling image \"quay.io/openstack-k8s-operators/openstack-must-gather:latest\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 21:09:15 +0000 UTC,LastTimestamp:2026-02-18 21:09:50.031418098 +0000 UTC m=+5474.813537642,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 21:09:50 crc kubenswrapper[5007]: E0218 21:09:50.262153 5007 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"openshift-must-gather-87qgc\" not found" event="&Event{ObjectMeta:{must-gather-7plh7.1895737099dfc069 openshift-must-gather-87qgc 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-must-gather-87qgc,Name:must-gather-7plh7,UID:ab9a7aac-2f72-4c4c-b860-922e6d508170,APIVersion:v1,ResourceVersion:86649,FieldPath:spec.containers{copy},},Reason:Failed,Message:Error: ImagePullBackOff,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 21:09:15 +0000 UTC,LastTimestamp:2026-02-18 21:09:50.031449539 +0000 UTC m=+5474.813569073,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 21:09:50 crc kubenswrapper[5007]: I0218 21:09:50.766420 5007 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-87qgc/must-gather-7plh7" Feb 18 21:09:50 crc kubenswrapper[5007]: I0218 21:09:50.861272 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzfrg\" (UniqueName: \"kubernetes.io/projected/ab9a7aac-2f72-4c4c-b860-922e6d508170-kube-api-access-wzfrg\") pod \"ab9a7aac-2f72-4c4c-b860-922e6d508170\" (UID: \"ab9a7aac-2f72-4c4c-b860-922e6d508170\") " Feb 18 21:09:50 crc kubenswrapper[5007]: I0218 21:09:50.861515 5007 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ab9a7aac-2f72-4c4c-b860-922e6d508170-must-gather-output\") pod \"ab9a7aac-2f72-4c4c-b860-922e6d508170\" (UID: \"ab9a7aac-2f72-4c4c-b860-922e6d508170\") " Feb 18 21:09:50 crc kubenswrapper[5007]: I0218 21:09:50.861792 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab9a7aac-2f72-4c4c-b860-922e6d508170-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ab9a7aac-2f72-4c4c-b860-922e6d508170" (UID: "ab9a7aac-2f72-4c4c-b860-922e6d508170"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 21:09:50 crc kubenswrapper[5007]: I0218 21:09:50.862855 5007 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ab9a7aac-2f72-4c4c-b860-922e6d508170-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 21:09:50 crc kubenswrapper[5007]: I0218 21:09:50.870766 5007 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab9a7aac-2f72-4c4c-b860-922e6d508170-kube-api-access-wzfrg" (OuterVolumeSpecName: "kube-api-access-wzfrg") pod "ab9a7aac-2f72-4c4c-b860-922e6d508170" (UID: "ab9a7aac-2f72-4c4c-b860-922e6d508170"). InnerVolumeSpecName "kube-api-access-wzfrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 21:09:50 crc kubenswrapper[5007]: I0218 21:09:50.909906 5007 scope.go:117] "RemoveContainer" containerID="07a5aba96955f1f449134841868f75d393efd98b5e1ba83f08d931ba0973ae4a" Feb 18 21:09:50 crc kubenswrapper[5007]: I0218 21:09:50.965145 5007 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzfrg\" (UniqueName: \"kubernetes.io/projected/ab9a7aac-2f72-4c4c-b860-922e6d508170-kube-api-access-wzfrg\") on node \"crc\" DevicePath \"\"" Feb 18 21:09:51 crc kubenswrapper[5007]: I0218 21:09:51.371773 5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d8p88" event={"ID":"e1f79950-bc88-4358-844e-ba7f87d1b564","Type":"ContainerStarted","Data":"22f18a4dfc7cf970f419ef07bb925614c71749321255e6e3675b59e1c10c7516"} Feb 18 21:09:51 crc kubenswrapper[5007]: I0218 21:09:51.373976 5007 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-87qgc/must-gather-7plh7" Feb 18 21:09:51 crc kubenswrapper[5007]: I0218 21:09:51.930570 5007 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab9a7aac-2f72-4c4c-b860-922e6d508170" path="/var/lib/kubelet/pods/ab9a7aac-2f72-4c4c-b860-922e6d508170/volumes" Feb 18 21:09:53 crc kubenswrapper[5007]: E0218 21:09:53.914847 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:09:54 crc kubenswrapper[5007]: E0218 21:09:54.912763 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:09:54 crc kubenswrapper[5007]: E0218 21:09:54.913476 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:10:00 crc kubenswrapper[5007]: E0218 21:10:00.913139 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" 
podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:10:00 crc kubenswrapper[5007]: E0218 21:10:00.913308 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:10:07 crc kubenswrapper[5007]: E0218 21:10:07.912210 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:10:09 crc kubenswrapper[5007]: E0218 21:10:09.912324 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:10:09 crc kubenswrapper[5007]: E0218 21:10:09.912815 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:10:11 crc kubenswrapper[5007]: E0218 21:10:11.913019 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:10:13 crc kubenswrapper[5007]: E0218 21:10:13.912549 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:10:19 crc kubenswrapper[5007]: E0218 21:10:19.915660 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:10:21 crc kubenswrapper[5007]: E0218 21:10:21.911832 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:10:23 crc kubenswrapper[5007]: E0218 21:10:23.912557 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:10:28 crc kubenswrapper[5007]: E0218 21:10:28.914710 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:10:34 crc kubenswrapper[5007]: E0218 21:10:34.912450 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:10:34 crc kubenswrapper[5007]: E0218 21:10:34.912822 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:10:34 crc kubenswrapper[5007]: E0218 21:10:34.913580 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:10:41 crc kubenswrapper[5007]: E0218 21:10:41.912381 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:10:44 crc kubenswrapper[5007]: E0218 21:10:44.023652 
5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:10:44 crc kubenswrapper[5007]: E0218 21:10:44.024130 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7s4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(22886784-40fe-4c1b-984e-96550487babf): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:10:44 crc kubenswrapper[5007]: E0218 
21:10:44.025488 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:10:45 crc kubenswrapper[5007]: E0218 21:10:45.933583 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:10:47 crc kubenswrapper[5007]: E0218 21:10:47.912747 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:10:49 crc kubenswrapper[5007]: E0218 21:10:49.912779 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:10:54 crc kubenswrapper[5007]: E0218 21:10:54.914783 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:10:54 crc kubenswrapper[5007]: E0218 21:10:54.914870 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:10:56 crc kubenswrapper[5007]: E0218 21:10:56.912602 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:10:59 crc kubenswrapper[5007]: E0218 21:10:59.911983 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:11:01 crc kubenswrapper[5007]: E0218 21:11:01.911137 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:11:05 crc kubenswrapper[5007]: E0218 21:11:05.926753 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:11:07 crc kubenswrapper[5007]: E0218 21:11:07.911361 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:11:11 crc kubenswrapper[5007]: E0218 21:11:11.914674 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:11:13 crc kubenswrapper[5007]: E0218 21:11:13.917737 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:11:13 crc kubenswrapper[5007]: E0218 21:11:13.918116 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:11:18 crc kubenswrapper[5007]: E0218 21:11:18.913532 5007 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:11:20 crc kubenswrapper[5007]: I0218 21:11:20.913561 5007 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 21:11:26 crc kubenswrapper[5007]: E0218 21:11:26.913618 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" Feb 18 21:11:27 crc kubenswrapper[5007]: E0218 21:11:27.792647 5007 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 21:11:27 crc kubenswrapper[5007]: E0218 21:11:27.793265 5007 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mhwn8_openshift-marketplace(9b961f2a-acd6-470c-a5bb-3531e607c818): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" logger="UnhandledError" Feb 18 21:11:27 crc kubenswrapper[5007]: E0218 21:11:27.813035 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:11:27 crc kubenswrapper[5007]: E0218 21:11:27.910673 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:11:27 crc kubenswrapper[5007]: E0218 21:11:27.911387 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:11:30 crc kubenswrapper[5007]: E0218 21:11:30.913738 5007 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:11:40 crc kubenswrapper[5007]: E0218 21:11:40.912715 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjmxc" podUID="9e85c638-b405-4e6c-baeb-5955fc34cb6e" Feb 18 21:11:40 crc kubenswrapper[5007]: E0218 21:11:40.912782 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-mhwn8" podUID="9b961f2a-acd6-470c-a5bb-3531e607c818" Feb 18 21:11:41 crc kubenswrapper[5007]: E0218 21:11:41.914737 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bx7n9" podUID="527ad024-e9b5-4aaf-96cf-d8f86ac2e70d" Feb 18 21:11:42 crc kubenswrapper[5007]: E0218 21:11:42.912100 5007 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="22886784-40fe-4c1b-984e-96550487babf" Feb 18 21:11:47 crc kubenswrapper[5007]: I0218 21:11:47.699026 
5007 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5gwr" event={"ID":"d79bddda-80b1-41bc-92e9-1d41a7c7c84a","Type":"ContainerStarted","Data":"d81ff4162fee42bab25ffa9dcb0fe540486dab6903d4e563cf022a2df17ce622"} Feb 18 21:11:47 crc kubenswrapper[5007]: I0218 21:11:47.729331 5007 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t5gwr" podStartSLOduration=2.768350014 podStartE2EDuration="42m19.729313821s" podCreationTimestamp="2026-02-18 20:29:28 +0000 UTC" firstStartedPulling="2026-02-18 20:29:29.649269696 +0000 UTC m=+3054.431389230" lastFinishedPulling="2026-02-18 21:11:46.610233503 +0000 UTC m=+5591.392353037" observedRunningTime="2026-02-18 21:11:47.714027932 +0000 UTC m=+5592.496147476" watchObservedRunningTime="2026-02-18 21:11:47.729313821 +0000 UTC m=+5592.511433345" Feb 18 21:11:48 crc kubenswrapper[5007]: I0218 21:11:48.615595 5007 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 21:11:48 crc kubenswrapper[5007]: I0218 21:11:48.615663 5007 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t5gwr" Feb 18 21:11:49 crc kubenswrapper[5007]: I0218 21:11:49.662028 5007 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-t5gwr" podUID="d79bddda-80b1-41bc-92e9-1d41a7c7c84a" containerName="registry-server" probeResult="failure" output=< Feb 18 21:11:49 crc kubenswrapper[5007]: timeout: failed to connect service ":50051" within 1s Feb 18 21:11:49 crc kubenswrapper[5007]: >